Fei-Fei Li
Sequoia Capital Professor, Denning Co-Director (On Leave) of Stanford HAI, Senior Fellow at HAI and Professor, by courtesy, of Operations, Information and Technology at the Graduate School of Business
Computer Science
Bio
Dr. Fei-Fei Li is the inaugural Sequoia Professor in the Computer Science Department at Stanford University and Co-Director of the Stanford Institute for Human-Centered AI (HAI). She served as Director of Stanford's AI Lab from 2013 to 2018. During her sabbatical from Stanford, from January 2017 to September 2018, she was a Vice President at Google and served as Chief Scientist of AI/ML at Google Cloud. Since then, she has served as a board member or advisor for various public and private companies.
Dr. Fei-Fei Li obtained her B.A. in physics from Princeton University in 1999 with High Honors and her Ph.D. in electrical engineering from the California Institute of Technology (Caltech) in 2005. She also holds an honorary doctorate from Harvey Mudd College.
Dr. Fei-Fei Li's current research interests include cognitively inspired AI, machine learning, deep learning, computer vision, robotic learning, and AI+healthcare, especially ambient intelligent systems for healthcare delivery. In the past she has also worked on cognitive and computational neuroscience. Dr. Li has published more than 300 scientific articles in top-tier journals and conferences in science, engineering, and computer science. She is the inventor of ImageNet and the ImageNet Challenge, a critical large-scale dataset and benchmarking effort that has contributed to the latest developments in deep learning and AI. In addition to her technical contributions, she is a leading national voice for diversity in STEM and AI, and is co-founder and chairperson of AI4ALL, a national non-profit aimed at increasing inclusion and diversity in AI education.
Dr. Li has been working with policymakers nationally and locally to ensure the responsible use of technology. Her engagements include a number of U.S. Senate and Congressional testimonies, service as a special advisor to the Secretary-General of the United Nations, membership on the California Future of Work Commission for the Governor of California (2019 - 2020), and membership on the National Artificial Intelligence Research Resource (NAIRR) Task Force for the White House Office of Science and Technology Policy (OSTP) and the National Science Foundation (NSF) (2021 - 2022).
Dr. Li is an elected member of the National Academy of Engineering (NAE), the National Academy of Medicine (NAM), and the American Academy of Arts and Sciences (AAAS). She is also a Fellow of the ACM, a member of the Council on Foreign Relations (CFR), and a recipient of the 2023 Intel Lifetime Achievements Innovation Award, the 2022 IEEE PAMI Thomas Huang Memorial Prize, the 2019 IEEE PAMI Longuet-Higgins Prize, the 2019 National Geographic Society Further Award, the 2016 IAPR J.K. Aggarwal Prize, the 2016 IEEE PAMI Mark Everingham Award, the 2016 NVIDIA Pioneer in AI Award, the 2014 IBM Faculty Fellow Award, the 2011 Alfred P. Sloan Research Fellowship, the 2009 NSF CAREER Award, and the 2006 Microsoft Research New Faculty Fellowship, among others. Dr. Li has been a keynote speaker at many academic and influential conferences, including the World Economic Forum in Davos, the 2017 Grace Hopper Conference, and the TED2015 main conference. Work from Dr. Li's lab has been featured in a variety of magazines and newspapers, including The New York Times, The Wall Street Journal, Fortune, Science, Wired, MIT Technology Review, the Financial Times, and others. She was named one of ELLE Magazine's 2017 Women in Tech, received a 2017 Awesome Women Award from Good Housekeeping, was named a Global Thinker of 2015 by Foreign Policy, and was recognized in 2016 as one of the “Great Immigrants: The Pride of America” by the Carnegie Foundation, whose past honorees include Albert Einstein, Yo-Yo Ma, and Sergey Brin.
Dr. Fei-Fei Li is the author of the book "The Worlds I See: Curiosity, Exploration and Discovery at the Dawn of AI", published by Macmillan Publishers in 2023.
(Dr. Li publishes under the name L. Fei-Fei)
Academic Appointments
-
Professor, Computer Science
-
Senior Fellow, Institute for Human-Centered Artificial Intelligence (HAI)
-
Professor (By courtesy), Operations, Information & Technology
-
Member, Bio-X
-
Member, Wu Tsai Neurosciences Institute
Administrative Appointments
-
Co-Director, Stanford Institute for Human-Centered AI (HAI) (2019 - Present)
-
Director, Stanford Artificial Intelligence Lab (SAIL) (2013 - 2018)
Honors & Awards
-
Woodrow Wilson Award (Princeton's highest honor for an undergraduate alumnus or alumna), Princeton University (2024)
-
Intel Lifetime Achievements Innovation Award, Intel (2023)
-
TIME100 AI, Time Magazine (2023)
-
Thomas S. Huang Memorial Prize, IEEE PAMI (2022)
-
Member, American Academy of Arts and Sciences (AAAS) (2021)
-
Member, National Academy of Medicine (NAM) (2020)
-
Member, National Academy of Engineering (NAE) (2020)
-
Distinguished Alumni Award, California Institute of Technology (2020)
-
Member, Council on Foreign Relations (CFR) (2020)
-
Technical Leadership Abie Award, AnitaB.org (2019)
-
PAMI Longuet-Higgins Prize, IEEE (2019)
-
Further Award, National Geographic Society (2019)
-
Best Paper Award, International Conference on Robotics and Automation (ICRA) (2019)
-
Fellow, ACM (2018)
-
WITI@UC Athena Award for Academic Leadership, U.C. Berkeley (2017)
-
J.K. Aggarwal Prize, International Association for Pattern Recognition (IAPR) (2016)
-
One of the 40 “Great Immigrants”, Carnegie Foundation (2016)
-
Mark Everingham Prize, IEEE PAMI (2016)
-
Pioneer in AI Research Award, NVIDIA (2016)
-
One of the Leading Global Thinkers, Foreign Policy (2015)
-
IBM Faculty Fellowship Award, IBM (2014)
-
W.M. Keck Foundation Faculty Scholar, Stanford University (2012 - 2016)
-
Fellowship, Alfred P. Sloan (2011)
-
Best Paper (Runner-Up), IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2010)
-
CAREER Award, NSF (2009)
-
New Faculty Fellowship, Microsoft Research (2006)
-
Best Short Course Prize, IEEE ICCV (2005)
-
Postgraduate Fellowship, National Science Foundation (1999 - 2003)
-
Fellowship for New Americans, Paul and Daisy Soros (1999 - 2003)
-
Martin Dale Fellowship, Princeton University (1999 - 2000)
-
Kusaka Memorial Prize in Physics, Princeton University (1999)
Boards, Advisory Committees, Professional Organizations
-
Special Advisor to Secretary General, United Nations (2023 - Present)
-
Member, National AI Research Resource Task Force, White House OSTP and NSF (2021 - 2023)
-
Co-Founder/Chairperson of the Board, AI4ALL (non-profit organization for inclusive AI education) (2015 - Present)
-
Fellow, ACM (2014 - Present)
-
Board of Directors, Computer Vision Foundation (non-profit organization supporting computer vision research) (2019 - Present)
-
Senior Member, IEEE (2003 - Present)
Professional Education
-
Ph.D. (Honorary), Harvey Mudd College, Engineering (2022)
-
Ph.D., California Institute of Technology, Electrical Engineering (2005)
-
M.S., California Institute of Technology, Electrical Engineering (2001)
-
B.A., Princeton University, Physics (1999)
Current Research and Scholarly Interests
AI, Machine Learning, Computer Vision, Robotics, AI+Healthcare, Human Vision
2024-25 Courses
- AI-Assisted Care: CS 337, MED 277 (Aut)
- Deep Learning for Computer Vision: CS 231N (Spr)
Independent Studies (17)
- Advanced Reading and Research: CS 499 (Aut, Win, Spr, Sum)
- Advanced Reading and Research: CS 499P (Aut, Win, Spr, Sum)
- Curricular Practical Training: CS 390A (Aut, Win, Spr, Sum)
- Curricular Practical Training: CS 390B (Aut, Win, Spr, Sum)
- Curricular Practical Training: CS 390C (Aut, Win, Spr, Sum)
- Directed Reading in Neurosciences: NEPR 299 (Aut, Win, Spr, Sum)
- Graduate Research: NEPR 399 (Aut, Win, Spr, Sum)
- Independent Project: CS 399 (Aut, Win, Spr, Sum)
- Independent Project: CS 399P (Aut, Win, Spr, Sum)
- Independent Work: CS 199 (Aut, Win, Spr, Sum)
- Independent Work: CS 199P (Aut, Win, Spr, Sum)
- Master's Research: CME 291 (Aut, Win, Spr, Sum)
- Part-time Curricular Practical Training: CS 390D (Aut, Win, Spr, Sum)
- Programming Service Project: CS 192 (Aut, Win, Spr, Sum)
- Senior Project: CS 191 (Aut, Win, Spr, Sum)
- Supervised Undergraduate Research: CS 195 (Aut, Win, Spr, Sum)
- Writing Intensive Senior Research Project: CS 191W (Aut, Win, Spr)
Prior Year Courses
2023-24 Courses
- Deep Learning for Computer Vision: CS 231N (Spr)
2022-23 Courses
- Deep Learning for Computer Vision: CS 231N (Spr)
2021-22 Courses
- AI-Assisted Care: CS 337, MED 277 (Aut)
- Deep Learning for Computer Vision: CS 231N (Spr)
- Interactive and Embodied Learning: CS 422, EDUC 234A (Win)
Stanford Advisees
-
Postdoctoral Faculty Sponsor: Ruohan Zhang
-
Doctoral Dissertation Advisor (AC): Josiah Wong, Bohan Wu
-
Master's Program Advisor: Ali Hindy, Abi Lopez, Arvind Sridhar, Saba Weatherspoon
-
Doctoral Dissertation Co-Advisor (AC): Michael Lingelbach
-
Doctoral (Program): Keshigeyan Chandrasegaran, Zane Durante, Cem Gokmen, Mohammadmahdi Honarmand, Wenlong Huang, Yunfan Jiang, Chengshu Li, Kyle Sargent, Sanjana Srivastava, Chen Wang, Bohan Wu, Tiange Xiang
-
Postdoctoral Research Mentor: Mengdi Xu
All Publications
-
Guest Editorial: Introduction to the Special Section on Graphs in Vision and Pattern Analysis
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
2023; 45 (6): 6867-6869
View details for DOI 10.1109/TPAMI.2023.3259779
View details for Web of Science ID 000982475600018
-
Socially situated artificial intelligence enables learning from human interaction.
Proceedings of the National Academy of Sciences of the United States of America
2022; 119 (39): e2115730119
Abstract
Regardless of how much data artificial intelligence agents have available, agents will inevitably encounter previously unseen situations in real-world deployments. Reacting to novel situations by acquiring new information from other people-socially situated learning-is a core faculty of human development. Unfortunately, socially situated learning remains an open challenge for artificial intelligence agents because they must learn how to interact with people to seek out the information that they lack. In this article, we formalize the task of socially situated artificial intelligence-agents that seek out new information through social interactions with people-as a reinforcement learning problem where the agent learns to identify meaningful and informative questions via rewards observed through social interaction. We manifest our framework as an interactive agent that learns how to ask natural language questions about photos as it broadens its visual intelligence on a large photo-sharing social network. Unlike active-learning methods, which implicitly assume that humans are oracles willing to answer any question, our agent adapts its behavior based on observed norms of which questions people are or are not interested to answer. Through an 8-mo deployment where our agent interacted with 236,000 social media users, our agent improved its performance at recognizing new visual information by 112%. A controlled field experiment confirmed that our agent outperformed an active-learning baseline by 25.6%. This work advances opportunities for continuously improving artificial intelligence (AI) agents that better respect norms in open social environments.
View details for DOI 10.1073/pnas.2115730119
View details for PubMedID 36122244
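The abstract above frames question-asking as a reinforcement-learning problem in which the only reward signal is whether people choose to answer. As a rough illustration only (not the paper's deployed agent), a minimal softmax-bandit sketch of that reward loop might look like the following; the question templates, answer probabilities, and update rule are assumptions made for the example.
```python
import math
import random
from collections import defaultdict

# Toy illustration of question-asking as reinforcement learning: the agent is
# rewarded only when a (simulated) person chooses to answer. The templates,
# answer probabilities, and softmax-bandit update are illustrative assumptions,
# not the deployed agent described in the paper.

QUESTIONS = ["what_is_this", "where_was_this_taken", "what_breed_is_the_dog"]
ANSWER_PROB = {"what_is_this": 0.6, "where_was_this_taken": 0.3,
               "what_breed_is_the_dog": 0.1}

def social_reward(question: str) -> float:
    """Return 1.0 if the simulated user answers (informative), else 0.0."""
    return 1.0 if random.random() < ANSWER_PROB[question] else 0.0

def train(steps: int = 5000, lr: float = 0.05) -> dict:
    prefs = defaultdict(float)   # softmax preferences over question templates
    baseline = 0.0               # running-average reward baseline
    for t in range(1, steps + 1):
        exps = {q: math.exp(prefs[q]) for q in QUESTIONS}
        z = sum(exps.values())
        probs = {q: e / z for q, e in exps.items()}
        q = random.choices(QUESTIONS, weights=[probs[x] for x in QUESTIONS])[0]
        r = social_reward(q)
        baseline += (r - baseline) / t
        # REINFORCE-style update: raise the probability of questions people
        # actually answer, lower it for questions they tend to ignore.
        for a in QUESTIONS:
            grad = (1.0 - probs[a]) if a == q else -probs[a]
            prefs[a] += lr * (r - baseline) * grad
    return dict(prefs)

if __name__ == "__main__":
    print(train())  # the frequently answered template ends up with the highest preference
```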
-
Advances, challenges and opportunities in creating data for trustworthy AI
NATURE MACHINE INTELLIGENCE
2022
View details for DOI 10.1038/s42256-022-00516-1
View details for Web of Science ID 000842575600001
-
Generalizable Task Planning Through Representation Pretraining
IEEE ROBOTICS AND AUTOMATION LETTERS
2022; 7 (3): 8299-8306
View details for DOI 10.1109/LRA.2022.3186635
View details for Web of Science ID 000838409600023
-
Searching for Computer Vision North Stars
DAEDALUS
2022; 151 (2): 85-99
View details for DOI 10.1162/daed_a_01902
View details for Web of Science ID 000786702600006
-
Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning
IEEE COMPUTER SOC. 2022: 10051-10061
View details for DOI 10.1109/CVPR52688.2022.00982
View details for Web of Science ID 000870759103013
-
OBJECTFOLDER 2.0: A Multisensory Object Dataset for Sim2Real Transfer
IEEE COMPUTER SOC. 2022: 10588-10598
View details for DOI 10.1109/CVPR52688.2022.01034
View details for Web of Science ID 000870759103065
-
Revisiting the "Video" in Video-Language Understanding
IEEE COMPUTER SOC. 2022: 2907-2917
View details for DOI 10.1109/CVPR52688.2022.00293
View details for Web of Science ID 000867754203017
-
A Study of Face Obfuscation in ImageNet
JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2022
View details for Web of Science ID 000900130206023
-
Embodied intelligence via learning and evolution.
Nature communications
2021; 12 (1): 5721
Abstract
The intertwined processes of learning and evolution in complex environmental niches have resulted in a remarkable diversity of morphological forms. Moreover, many aspects of animal intelligence are deeply embodied in these evolved morphologies. However, the principles governing relations between environmental complexity, evolved morphology, and the learnability of intelligent control, remain elusive, because performing large-scale in silico experiments on evolution and learning is challenging. Here, we introduce Deep Evolutionary Reinforcement Learning (DERL): a computational framework which can evolve diverse agent morphologies to learn challenging locomotion and manipulation tasks in complex environments. Leveraging DERL we demonstrate several relations between environmental complexity, morphological intelligence and the learnability of control. First, environmental complexity fosters the evolution of morphological intelligence as quantified by the ability of a morphology to facilitate the learning of novel tasks. Second, we demonstrate a morphological Baldwin effect i.e., in our simulations evolution rapidly selects morphologies that learn faster, thereby enabling behaviors learned late in the lifetime of early ancestors to be expressed early in the descendants lifetime. Third, we suggest a mechanistic basis for the above relationships through the evolution of morphologies that are more physically stable and energy efficient, and can therefore facilitate learning and control.
View details for DOI 10.1038/s41467-021-25874-z
View details for PubMedID 34615862
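As a loose illustration of the outer-evolution / inner-learning structure described above (not the DERL framework itself), the sketch below evolves toy "morphology" vectors whose fitness is the score reached after a short simulated learning phase; the genome encoding, fitness proxy, and constants are all assumptions.
```python
import random

# Schematic of an outer evolutionary search whose fitness is measured after a
# short inner learning phase. The genome encoding, fitness proxy, and constants
# are illustrative assumptions, not the DERL framework described in the paper.

def fitness_after_learning(morphology, learn_steps=20):
    """Proxy for learnability: base task reward plus small lifetime learning gains."""
    score = -sum((g - 0.5) ** 2 for g in morphology)  # stand-in task reward
    for _ in range(learn_steps):
        score += random.uniform(0.0, 0.02)            # noisy within-lifetime improvement
    return score

def evolve(pop_size=32, genome_len=8, generations=30):
    population = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness_after_learning, reverse=True)
        parents = ranked[: pop_size // 4]             # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            parent = random.choice(parents)
            children.append([g + random.gauss(0.0, 0.05) for g in parent])  # mutate morphology
        population = parents + children
    return max(population, key=fitness_after_learning)

if __name__ == "__main__":
    best = evolve()
    print([round(g, 2) for g in best])  # genes drift toward the high-fitness region (about 0.5 here)
```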
-
Scalable Differential Privacy with Sparse Network Finetuning
IEEE COMPUTER SOC. 2021: 5057-5066
View details for DOI 10.1109/CVPR46437.2021.00502
View details for Web of Science ID 000739917305026
-
Neural Event Semantics for Grounded Language Understanding
TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS
2021; 9: 875-890
View details for DOI 10.1162/tacl_a_00402
View details for Web of Science ID 000751952200052
-
iGibson 1.0: A Simulation Environment for Interactive Tasks in Large Realistic Scenes
IEEE. 2021: 7520-7527
View details for DOI 10.1109/IROS51168.2021.9636667
View details for Web of Science ID 000755125506005
-
Generalization Through Hand-Eye Coordination: An Action Space for Learning Spatially-Invariant Visuomotor Control
IEEE. 2021: 8913-8920
View details for DOI 10.1109/IROS51168.2021.9636023
View details for Web of Science ID 000755125507002
-
Quantifying Parkinson's disease motor severity under uncertainty using MDS-UPDRS videos.
Medical image analysis
2021; 73: 102179
Abstract
Parkinson's disease (PD) is a brain disorder that primarily affects motor function, leading to slow movement, tremor, and stiffness, as well as postural instability and difficulty with walking/balance. The severity of PD motor impairments is clinically assessed by part III of the Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS), a universally-accepted rating scale. However, experts often disagree on the exact scoring of individuals. In the presence of label noise, training a machine learning model using only scores from a single rater may introduce bias, while training models with multiple noisy ratings is a challenging task due to the inter-rater variabilities. In this paper, we introduce an ordinal focal neural network to estimate the MDS-UPDRS scores from input videos, to leverage the ordinal nature of MDS-UPDRS scores and combat class imbalance. To handle multiple noisy labels per exam, the training of the network is regularized via rater confusion estimation (RCE), which encodes the rating habits and skills of raters via a confusion matrix. We apply our pipeline to estimate MDS-UPDRS test scores from their video recordings including gait (with multiple Raters, R=3) and finger tapping scores (single rater). On a sizable clinical dataset for the gait test (N=55), we obtained a classification accuracy of 72% with majority vote as ground-truth, and an accuracy of ∼84% of our model predicting at least one of the raters' scores. Our work demonstrates how computer-assisted technologies can be used to track patients and their motor impairments, even when there is uncertainty in the clinical ratings. The latest version of the code will be available at https://github.com/mlu355/PD-Motor-Severity-Estimation.
View details for DOI 10.1016/j.media.2021.102179
View details for PubMedID 34340101
-
Deep Affordance Foresight: Planning Through What Can Be Done in the Future
IEEE. 2021: 6206-6213
View details for DOI 10.1109/ICRA48506.2021.9560841
View details for Web of Science ID 000765738804092
-
Learning Multi-Arm Manipulation Through Collaborative Teleoperation
IEEE. 2021: 9212-9219
View details for DOI 10.1109/ICRA48506.2021.9561491
View details for Web of Science ID 000771405402058
-
Metadata Normalization.
Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition
2021; 2021: 10912-10922
Abstract
Batch Normalization (BN) and its variants have delivered tremendous success in combating the covariate shift induced by the training step of deep learning methods. While these techniques normalize feature distributions by standardizing with batch statistics, they do not correct the influence on features from extraneous variables or multiple distributions. Such extra variables, referred to as metadata here, may create bias or confounding effects (e.g., race when classifying gender from face images). We introduce the Metadata Normalization (MDN) layer, a new batch-level operation which can be used end-to-end within the training framework, to correct the influence of metadata on feature distributions. MDN adopts a regression analysis technique traditionally used for preprocessing to remove (regress out) the metadata effects on model features during training. We utilize a metric based on distance correlation to quantify the distribution bias from the metadata and demonstrate that our method successfully removes metadata effects on four diverse settings: one synthetic, one 2D image, one video, and one 3D medical image dataset.
View details for DOI 10.1109/cvpr46437.2021.01077
View details for PubMedID 34776724
View details for PubMedCentralID PMC8589298
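The core operation described above, removing the linear effect of metadata from batch features via regression, can be sketched in a few lines of NumPy. This is an illustrative residualization step under assumed tensor shapes, not the authors' released training-time layer.
```python
import numpy as np

# Minimal sketch of the residualization idea behind a Metadata Normalization
# (MDN)-style step: regress each feature dimension on the metadata within a
# batch and keep only the residual, removing the linear metadata effect.

def metadata_normalize(features: np.ndarray, metadata: np.ndarray) -> np.ndarray:
    """
    features: (batch, d) activations from some layer.
    metadata: (batch, m) extraneous variables to remove (e.g., acquisition site).
    Returns features with the batch-level linear metadata component removed.
    """
    X = np.hstack([np.ones((metadata.shape[0], 1)), metadata])  # add intercept column
    # Ordinary least squares: beta = (X^T X)^-1 X^T F, solved per feature column.
    beta, *_ = np.linalg.lstsq(X, features, rcond=None)
    residual = features - X @ beta          # drop the part explained by metadata
    # Add the intercept back so features keep their means instead of being zero-centered.
    return residual + X[:, :1] @ beta[:1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    meta = rng.normal(size=(256, 2))
    feats = rng.normal(size=(256, 8)) + meta @ rng.normal(size=(2, 8))  # metadata leaks into features
    cleaned = metadata_normalize(feats, meta)
    # Cross-correlation with metadata is near zero after normalization.
    print(np.abs(np.corrcoef(cleaned.T, meta.T)[:8, 8:]).max())
```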
-
EVALUATING FACIAL RECOGNITION TECHNOLOGY: A PROTOCOL FOR PERFORMANCE ASSESSMENT IN NEW DOMAINS
DENVER LAW REVIEW
2021; 98 (4): 753-773
View details for Web of Science ID 000686480300001
-
Discovering Generalizable Skills via Automated Generation of Diverse Tasks
RSS FOUNDATION-ROBOTICS SCIENCE & SYSTEMS FOUNDATION. 2021
View details for Web of Science ID 000684604200010
-
Representation Learning with Statistical Independence to Mitigate Bias.
IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision
2021; 2021: 2512-2522
Abstract
Presence of bias (in datasets or tasks) is inarguably one of the most critical challenges in machine learning applications that has alluded to pivotal debates in recent years. Such challenges range from spurious associations between variables in medical studies to the bias of race in gender or face recognition systems. Controlling for all types of biases in the dataset curation stage is cumbersome and sometimes impossible. The alternative is to use the available data and build models incorporating fair representation learning. In this paper, we propose such a model based on adversarial training with two competing objectives to learn features that have (1) maximum discriminative power with respect to the task and (2) minimal statistical mean dependence with the protected (bias) variable(s). Our approach does so by incorporating a new adversarial loss function that encourages a vanished correlation between the bias and the learned features. We apply our method to synthetic data, medical images (containing task bias), and a dataset for gender classification (containing dataset bias). Our results show that the learned features by our method not only result in superior prediction performance but also are unbiased.
View details for DOI 10.1109/wacv48630.2021.00256
View details for PubMedID 34522832
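One simple way to express the "vanished correlation" objective described above is a squared Pearson-correlation penalty between each learned feature dimension and the protected variable, added to the task loss. The sketch below is a simplification of the paper's adversarial formulation; the penalty weight and array shapes are assumptions.
```python
import numpy as np

# Sketch of penalizing linear dependence between learned features and a
# protected (bias) variable: a squared Pearson-correlation penalty added to the
# task loss. The paper uses an adversarial training scheme; this stand-alone
# penalty is an illustrative simplification, not the authors' loss.

def correlation_penalty(features: np.ndarray, bias_var: np.ndarray) -> float:
    """Mean squared Pearson correlation between each feature dimension and the bias variable."""
    f = features - features.mean(axis=0, keepdims=True)
    b = bias_var - bias_var.mean()
    denom = np.sqrt((f ** 2).sum(axis=0)) * np.sqrt((b ** 2).sum()) + 1e-8
    corr = (f * b[:, None]).sum(axis=0) / denom
    return float((corr ** 2).mean())

def total_loss(task_loss: float, features: np.ndarray, bias_var: np.ndarray,
               lam: float = 1.0) -> float:
    """Task loss plus a weighted bias-correlation penalty (lam is an assumed weight)."""
    return task_loss + lam * correlation_penalty(features, bias_var)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    bias = rng.normal(size=200)
    biased_feats = np.stack([bias + rng.normal(size=200), rng.normal(size=200)], axis=1)
    print(correlation_penalty(biased_feats, bias))                # large: features encode the bias
    print(correlation_penalty(rng.normal(size=(200, 2)), bias))   # near zero: independent features
```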
-
SECANT: Self-Expert Cloning for Zero-Shot Generalization of Visual Policies
JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2021
View details for Web of Science ID 000683104603010
-
Greedy Hierarchical Variational Autoencoders for Large-Scale Video Prediction
IEEE COMPUTER SOC. 2021: 2318-2328
View details for DOI 10.1109/CVPR46437.2021.00235
View details for Web of Science ID 000739917302051
-
Ethical issues in using ambient intelligence in health-care settings.
The Lancet. Digital health
2020
Abstract
Ambient intelligence is increasingly finding applications in health-care settings, such as helping to ensure clinician and patient safety by monitoring staff compliance with clinical best practices or relieving staff of burdensome documentation tasks. Ambient intelligence involves using contactless sensors and contact-based wearable devices embedded in health-care settings to collect data (eg, imaging data of physical spaces, audio data, or body temperature), coupled with machine learning algorithms to efficiently and effectively interpret these data. Despite the promise of ambient intelligence to improve quality of care, the continuous collection of large amounts of sensor data in health-care settings presents ethical challenges, particularly in terms of privacy, data management, bias and fairness, and informed consent. Navigating these ethical issues is crucial not only for the success of individual uses, but for acceptance of the field as a whole.
View details for DOI 10.1016/S2589-7500(20)30275-2
View details for PubMedID 33358138
-
AI will change the world, so it's time to change AI
NATURE
2020; 588 (7837): S118
View details for Web of Science ID 000624296000019
View details for PubMedID 33299217
-
Vision-based Estimation of MDS-UPDRS Gait Scores for Assessing Parkinson's Disease Motor Severity.
Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention
2020; 12263: 637–47
Abstract
Parkinson's disease (PD) is a progressive neurological disorder primarily affecting motor function resulting in tremor at rest, rigidity, bradykinesia, and postural instability. The physical severity of PD impairments can be quantified through the Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS), a widely used clinical rating scale. Accurate and quantitative assessment of disease progression is critical to developing a treatment that slows or stops further advancement of the disease. Prior work has mainly focused on dopamine transport neuroimaging for diagnosis or costly and intrusive wearables evaluating motor impairments. For the first time, we propose a computer vision-based model that observes non-intrusive video recordings of individuals, extracts their 3D body skeletons, tracks them through time, and classifies the movements according to the MDS-UPDRS gait scores. Experimental results show that our proposed method performs significantly better than chance and competing methods with an F1-score of 0.83 and a balanced accuracy of 81%. This is the first benchmark for classifying PD patients based on MDS-UPDRS gait severity and could be an objective biomarker for disease severity. Our work demonstrates how computer-assisted technologies can be used to non-intrusively monitor patients and their motor impairments. The code is available at https://github.com/mlu355/PD-Motor-Severity-Estimation.
View details for DOI 10.1007/978-3-030-59716-0_61
View details for PubMedID 33103164
-
Assessing the accuracy of automatic speech recognition for psychotherapy
NPJ DIGITAL MEDICINE
2020; 3 (1)
View details for DOI 10.1038/s41746-020-0285-8
View details for Web of Science ID 000537719700001
-
Making Sense of Vision and Touch: Learning Multimodal Representations for Contact-Rich Tasks
IEEE TRANSACTIONS ON ROBOTICS
2020; 36 (3): 582–96
View details for DOI 10.1109/TRO.2019.2959445
View details for Web of Science ID 000543027200001
-
Illuminating the dark spaces of healthcare with ambient intelligence.
Nature
2020; 585 (7824): 193–202
Abstract
Advances in machine learning and contactless sensors have given rise to ambient intelligence-physical spaces that are sensitive and responsive to the presence of humans. Here we review how this technology could improve our understanding of the metaphorically dark, unobserved spaces of healthcare. In hospital spaces, early applications could soon enable more efficient clinical workflows and improved patient safety in intensive care units and operating rooms. In daily living spaces, ambient intelligence could prolong the independence of older individuals and improve the management of individuals with a chronic disease by understanding everyday behaviour. Similar to other technologies, transformation into clinical applications at scale must overcome challenges such as rigorous clinical validation, appropriate data privacy and model transparency. Thoughtful use of this technology would enable us to understand the complex interplay between the physical environment and health-critical human behaviours.
View details for DOI 10.1038/s41586-020-2669-y
View details for PubMedID 32908264
-
Motion Reasoning for Goal-Based Imitation Learning
IEEE. 2020: 4878-4884
View details for Web of Science ID 000712319503063
-
KETO: Learning Keypoint Representations for Tool Manipulation
IEEE. 2020: 7278-7285
View details for Web of Science ID 000712319504125
-
IRIS: Implicit Reinforcement without Interaction at Scale for Learning Control from Offline Robot Manipulation Data
IEEE. 2020: 4414-4420
View details for Web of Science ID 000712319503014
-
ADAPT: Zero-Shot Adaptive Policy Transfer for Stochastic Dynamical Systems
SPRINGER INTERNATIONAL PUBLISHING AG. 2020: 437–53
View details for DOI 10.1007/978-3-030-28619-4_34
View details for Web of Science ID 000632686200034
-
Scene Graph Prediction with Limited Labels.
Proceedings. IEEE International Conference on Computer Vision
2020; 2019: 2580–90
Abstract
Visual knowledge bases such as Visual Genome power numerous applications in computer vision, including visual question answering and captioning, but suffer from sparse, incomplete relationships. All scene graph models to date are limited to training on a small set of visual relationships that have thousands of training labels each. Hiring human annotators is expensive, and using textual knowledge base completion methods are incompatible with visual data. In this paper, we introduce a semi-supervised method that assigns probabilistic relationship labels to a large number of unlabeled images using few labeled examples. We analyze visual relationships to suggest two types of image-agnostic features that are used to generate noisy heuristics, whose outputs are aggregated using a factor graph-based generative model. With as few as 10 labeled examples per relationship, the generative model creates enough training data to train any existing state-of-the-art scene graph model. We demonstrate that our method outperforms all baseline approaches on scene graph prediction by 5.16 recall@100 for PREDCLS. In our limited label setting, we define a complexity metric for relationships that serves as an indicator (R2 = 0.778) for conditions under which our method succeeds over transfer learning, the de-facto approach for training with limited labels.
View details for DOI 10.1109/iccv.2019.00267
View details for PubMedID 32218709
View details for PubMedCentralID PMC7098690
-
GTI: Learning to Generalize Across Long-Horizon Tasks from Human Demonstrations
MIT PRESS. 2020
View details for Web of Science ID 000570976900061
-
Towards Fairer Datasets: Filtering and Balancing the Distribution of the People Subtree in the ImageNet Hierarchy
ASSOC COMPUTING MACHINERY. 2020: 547–58
View details for DOI 10.1145/3351095.3375709
View details for Web of Science ID 000620151400068
-
Automatic detection of hand hygiene using computer vision technology.
Journal of the American Medical Informatics Association : JAMIA
2020
Abstract
Hand hygiene is essential for preventing hospital-acquired infections but is difficult to accurately track. The gold-standard (human auditors) is insufficient for assessing true overall compliance. Computer vision technology has the ability to perform more accurate appraisals. Our primary objective was to evaluate if a computer vision algorithm could accurately observe hand hygiene dispenser use in images captured by depth sensors. Sixteen depth sensors were installed on one hospital unit. Images were collected continuously from March to August 2017. Utilizing a convolutional neural network, a machine learning algorithm was trained to detect hand hygiene dispenser use in the images. The algorithm's accuracy was then compared with simultaneous in-person observations of hand hygiene dispenser usage. Concordance rate between human observation and algorithm's assessment was calculated. Ground truth was established by blinded annotation of the entire image set. Sensitivity and specificity were calculated for both human and machine-level observation. A concordance rate of 96.8% was observed between human and algorithm (kappa = 0.85). Concordance among the 3 independent auditors to establish ground truth was 95.4% (Fleiss's kappa = 0.87). Sensitivity and specificity of the machine learning algorithm were 92.1% and 98.3%, respectively. Human observations showed sensitivity and specificity of 85.2% and 99.4%, respectively. A computer vision algorithm was equivalent to human observation in detecting hand hygiene dispenser use. Computer vision monitoring has the potential to provide a more complete appraisal of hand hygiene activity in hospitals than the current gold-standard given its ability for continuous coverage of a unit in space and time.
View details for DOI 10.1093/jamia/ocaa115
View details for PubMedID 32712656
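The evaluation above reports concordance, Cohen's kappa, sensitivity, and specificity. For readers who want the arithmetic, here is a small helper computing those metrics from 2x2 confusion counts; the example counts are invented for illustration, not the study's data.
```python
# Standard binary agreement metrics of the kind reported in the abstract above,
# computed from 2x2 confusion counts. The example counts are made up.

def binary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    total = tp + fp + fn + tn
    observed_agreement = (tp + tn) / total
    # Chance-expected agreement from the two raters' marginal proportions (Cohen's kappa).
    p_yes = ((tp + fp) / total) * ((tp + fn) / total)
    p_no = ((fn + tn) / total) * ((fp + tn) / total)
    expected = p_yes + p_no
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "concordance": observed_agreement,
        "kappa": (observed_agreement - expected) / (1 - expected),
    }

if __name__ == "__main__":
    print(binary_metrics(tp=140, fp=3, fn=12, tn=345))
```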
-
Automated abnormality detection in lower extremity radiographs using deep learning
NATURE MACHINE INTELLIGENCE
2019; 1 (12): 578–83
View details for DOI 10.1038/s42256-019-0126-0
View details for Web of Science ID 000571267000009
-
Learning task-oriented grasping for tool manipulation from simulated self-supervision
INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
2019
View details for DOI 10.1177/0278364919872545
View details for Web of Science ID 000484533500001
-
A computer vision system for deep learning-based detection of patient mobilization activities in the ICU
NPJ DIGITAL MEDICINE
2019; 2
View details for DOI 10.1038/s41746-019-0087-z
View details for Web of Science ID 000462450700001
-
Neural Task Graphs: Generalizing to Unseen Tasks from a Single Video Demonstration
IEEE. 2019: 8557–66
View details for DOI 10.1109/CVPR.2019.00876
View details for Web of Science ID 000542649302018
-
Situational Fusion of Visual Representation for Visual Navigation
IEEE COMPUTER SOC. 2019: 2881–90
View details for DOI 10.1109/ICCV.2019.00297
View details for Web of Science ID 000531438103003
-
Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks
IEEE. 2019: 8943–50
View details for Web of Science ID 000494942306083
-
Scaling Robot Supervision to Hundreds of Hours with RoboTurk: Robotic Manipulation Dataset through Human Reasoning and Dexterity
IEEE. 2019: 1048–55
View details for Web of Science ID 000544658400114
-
Continuous Relaxation of Symbolic Planner for One-Shot Imitation Learning
IEEE. 2019: 2635–42
View details for Web of Science ID 000544658402034
-
HYPE: A Benchmark for Human eYe Perceptual Evaluation of Generative Models
NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2019
View details for Web of Science ID 000534424303044
-
AUDIO-LINGUISTIC EMBEDDINGS FOR SPOKEN SENTENCES
IEEE. 2019: 7355–59
View details for Web of Science ID 000482554007118
-
Visual Relationships as Functions: Enabling Few-Shot Scene Graph Prediction
IEEE COMPUTER SOC. 2019: 1730–39
View details for DOI 10.1109/ICCVW.2019.00214
View details for Web of Science ID 000554591601096
-
Composing Text and Image for Image Retrieval - An Empirical Odyssey
IEEE COMPUTER SOC. 2019: 6432–41
View details for DOI 10.1109/CVPR.2019.00660
View details for Web of Science ID 000529484006064
-
Information Maximizing Visual Question Generation
IEEE COMPUTER SOC. 2019: 2008–18
View details for DOI 10.1109/CVPR.2019.00211
View details for Web of Science ID 000529484002018
-
Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation
IEEE COMPUTER SOC. 2019: 82–92
View details for DOI 10.1109/CVPR.2019.00017
View details for Web of Science ID 000529484000009
-
Peeking into the Future: Predicting Future Person Activities and Locations in Videos
IEEE COMPUTER SOC. 2019: 5718–27
View details for DOI 10.1109/CVPR.2019.00587
View details for Web of Science ID 000529484005092
-
DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion
IEEE COMPUTER SOC. 2019: 3338–47
View details for DOI 10.1109/CVPR.2019.00346
View details for Web of Science ID 000529484003053
-
Scene Memory Transformer for Embodied Agents in Long-Horizon Tasks
IEEE COMPUTER SOC. 2019: 538–47
View details for DOI 10.1109/CVPR.2019.00063
View details for Web of Science ID 000529484000055
-
D3TW: Discriminative Differentiable Dynamic Time Warping for Weakly Supervised Action Alignment and Segmentation
IEEE COMPUTER SOC. 2019: 3541–50
View details for DOI 10.1109/CVPR.2019.00366
View details for Web of Science ID 000529484003071
-
Peeking into the Future: Predicting Future Person Activities and Locations in Videos
IEEE. 2019: 2960–63
View details for DOI 10.1109/CVPRW.2019.00358
View details for Web of Science ID 000569983600352
-
Every Moment Counts: Dense Detailed Labeling of Actions in Complex Videos
INTERNATIONAL JOURNAL OF COMPUTER VISION
2018; 126 (2-4): 375–89
View details for DOI 10.1007/s11263-017-1013-y
View details for Web of Science ID 000425619100013
-
Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior
ELIFE
2018; 7
Abstract
Inherent correlations between visual and semantic features in real-world scenes make it difficult to determine how different scene properties contribute to neural representations. Here, we assessed the contributions of multiple properties to scene representation by partitioning the variance explained in human behavioral and brain measurements by three feature models whose inter-correlations were minimized a priori through stimulus preselection. Behavioral assessments of scene similarity reflected unique contributions from a functional feature model indicating potential actions in scenes as well as high-level visual features from a deep neural network (DNN). In contrast, similarity of cortical responses in scene-selective areas was uniquely explained by mid- and high-level DNN features only, while an object label model did not contribute uniquely to either domain. The striking dissociation between functional and DNN features in their contribution to behavioral and brain representations of scenes indicates that scene-selective cortex represents only a subset of behaviorally relevant scene information.
View details for PubMedID 29513219
-
Bedside Computer Vision - Moving Artificial Intelligence from Driver Assistance to Patient Safety.
The New England journal of medicine
2018; 378 (14): 1271–73
View details for PubMedID 29617592
-
Temporal Modular Networks for Retrieving Complex Compositional Activities in Videos
European Conference on Computer Vision
2018: 569–86
View details for DOI 10.1007/978-3-030-01219-9_34
-
Progressive Neural Architecture Search
SPRINGER INTERNATIONAL PUBLISHING AG. 2018: 19-35
View details for DOI 10.1007/978-3-030-01246-5_2
View details for Web of Science ID 000594203000002
-
Neural Graph Matching Networks for Fewshot 3D Action Recognition
SPRINGER INTERNATIONAL PUBLISHING AG. 2018: 673-689
View details for DOI 10.1007/978-3-030-01246-5_40
View details for Web of Science ID 000594203000040
-
Learning Task-Oriented Grasping for Tool Manipulation from Simulated Self-Supervision
MIT PRESS. 2018
View details for Web of Science ID 000570976700012
-
Dynamic Task Prioritization for Multitask Learning
SPRINGER INTERNATIONAL PUBLISHING AG. 2018: 282-299
View details for DOI 10.1007/978-3-030-01270-0_17
View details for Web of Science ID 000603403700017
-
HiDDeN: Hiding Data With Deep Networks
SPRINGER INTERNATIONAL PUBLISHING AG. 2018: 682-697
View details for DOI 10.1007/978-3-030-01267-0_40
View details for Web of Science ID 000612999000040
-
Graph Distillation for Action Detection with Privileged Modalities
SPRINGER INTERNATIONAL PUBLISHING AG. 2018: 174–92
View details for DOI 10.1007/978-3-030-01264-9_11
View details for Web of Science ID 000604454400011
-
Image Generation from Scene Graphs
IEEE. 2018: 1219–28
View details for DOI 10.1109/CVPR.2018.00133
View details for Web of Science ID 000457843601036
-
Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks
IEEE. 2018: 2255–64
View details for DOI 10.1109/CVPR.2018.00240
View details for Web of Science ID 000457843602040
-
Referring Relationships
IEEE. 2018: 6867–76
View details for DOI 10.1109/CVPR.2018.00718
View details for Web of Science ID 000457843607003
-
Finding "It": Weakly-Supervised Reference-Aware Visual Grounding in Instructional Videos
IEEE. 2018: 5948–57
View details for DOI 10.1109/CVPR.2018.00623
View details for Web of Science ID 000457843606011
-
What Makes a Video a Video: Analyzing Temporal Information in Video Understanding Models and Datasets
IEEE. 2018: 7366–75
View details for DOI 10.1109/CVPR.2018.00769
View details for Web of Science ID 000457843607054
-
Neural Task Programming: Learning to Generalize Across Hierarchical Tasks
IEEE COMPUTER SOC. 2018: 3795–3802
View details for Web of Science ID 000446394502137
-
Learning to Play With Intrinsically-Motivated, Self-Aware Agents
NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2018
View details for Web of Science ID 000461852002089
-
Flexible Neural Representation for Physics Prediction
NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2018
View details for Web of Science ID 000461852003036
-
Engagement Learning: Expanding Visual Knowledge by Engaging Online Participants
ASSOC COMPUTING MACHINERY. 2018: 87–89
View details for DOI 10.1145/3266037.3266110
View details for Web of Science ID 000494261200029
-
Tool Detection and Operative Skill Assessment in Surgical Videos Using Region-Based Convolutional Neural Networks
IEEE. 2018: 691–99
View details for DOI 10.1109/WACV.2018.00081
View details for Web of Science ID 000434349200075
-
Scaling Human-Object Interaction Recognition through Zero-Shot Learning
IEEE. 2018: 1568–76
View details for DOI 10.1109/WACV.2018.00181
View details for Web of Science ID 000434349200169
-
Using deep learning and Google Street View to estimate the demographic makeup of neighborhoods across the United States
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
2017; 114 (50): 13108–13
Abstract
The United States spends more than $250 million each year on the American Community Survey (ACS), a labor-intensive door-to-door study that measures statistics relating to race, gender, education, occupation, unemployment, and other demographic factors. Although a comprehensive source of data, the lag between demographic changes and their appearance in the ACS can exceed several years. As digital imagery becomes ubiquitous and machine vision techniques improve, automated data analysis may become an increasingly practical supplement to the ACS. Here, we present a method that estimates socioeconomic characteristics of regions spanning 200 US cities by using 50 million images of street scenes gathered with Google Street View cars. Using deep learning-based computer vision techniques, we determined the make, model, and year of all motor vehicles encountered in particular neighborhoods. Data from this census of motor vehicles, which enumerated 22 million automobiles in total (8% of all automobiles in the United States), were used to accurately estimate income, race, education, and voting patterns at the zip code and precinct level. (The average US precinct contains ∼1,000 people.) The resulting associations are surprisingly simple and powerful. For instance, if the number of sedans encountered during a drive through a city is higher than the number of pickup trucks, the city is likely to vote for a Democrat during the next presidential election (88% chance); otherwise, it is likely to vote Republican (82%). Our results suggest that automated systems for monitoring demographics may effectively complement labor-intensive approaches, with the potential to measure demographics with fine spatial resolution, in close to real time.
View details for PubMedID 29183967
-
Fei-Fei Li
TECHNOLOGY REVIEW
2017; 120 (6): 26-27
View details for Web of Science ID 000414118100016
-
Evidence for similar patterns of neural activity elicited by picture- and word-based representations of natural scenes
NEUROIMAGE
2017; 155: 422–36
Abstract
A long-standing core question in cognitive science is whether different modalities and representation types (pictures, words, sounds, etc.) access a common store of semantic information. Although different input types have been shown to activate a shared network of brain regions, this does not necessitate that there is a common representation, as the neurons in these regions could still differentially process the different modalities. However, multi-voxel pattern analysis can be used to assess whether, e.g., pictures and words evoke a similar pattern of activity, such that the patterns that separate categories in one modality transfer to the other. Prior work using this method has found support for a common code, but has two limitations: they have either only examined disparate categories (e.g. animals vs. tools) that are known to activate different brain regions, raising the possibility that the pattern separation and inferred similarity reflects only large scale differences between the categories or they have been limited to individual object representations. By using natural scene categories, we not only extend the current literature on cross-modal representations beyond objects, but also, because natural scene categories activate a common set of brain regions, we identify a more fine-grained (i.e. higher spatial resolution) common representation. Specifically, we studied picture- and word-based representations of natural scene stimuli from four different categories: beaches, cities, highways, and mountains. Participants passively viewed blocks of either phrases (e.g. "sandy beach") describing scenes or photographs from those same scene categories. To determine whether the phrases and pictures evoke a common code, we asked whether a classifier trained on one stimulus type (e.g. phrase stimuli) would transfer (i.e. cross-decode) to the other stimulus type (e.g. picture stimuli). The analysis revealed cross-decoding in the occipitotemporal, posterior parietal and frontal cortices. This similarity of neural activity patterns across the two input types, for categories that co-activate local brain regions, provides strong evidence of a common semantic code for pictures and words in the brain.
View details for PubMedID 28343000
-
Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations
INTERNATIONAL JOURNAL OF COMPUTER VISION
2017; 123 (1): 32-73
View details for DOI 10.1007/s11263-016-0981-7
View details for Web of Science ID 000400276400003
-
Deep Visual-Semantic Alignments for Generating Image Descriptions
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
2017; 39 (4): 664–76
Abstract
We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks (RNN) over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions outperform retrieval baselines on both full images and on a new dataset of region-level annotations. Finally, we conduct large-scale analysis of our RNN language model on the Visual Genome dataset of 4.1 million captions and highlight the differences between image and region-level caption statistics.
View details for DOI 10.1109/TPAMI.2016.2598339
View details for Web of Science ID 000397717600005
View details for PubMedID 27514036
-
Characterizing and Improving Stability in Neural Style Transfer
IEEE. 2017: 4087–96
View details for DOI 10.1109/ICCV.2017.438
View details for Web of Science ID 000425498404017
-
Learning to Predict Human Behavior in Crowded Scenes
GROUP AND CROWD BEHAVIOR FOR COMPUTER VISION
2017: 183-207
View details for DOI 10.1016/B978-0-12-809276-7.00011-4
View details for Web of Science ID 000416037000009
-
Tracking Millions of Humans in Crowded Spaces
GROUP AND CROWD BEHAVIOR FOR COMPUTER VISION
2017: 115-135
View details for DOI 10.1016/B978-0-12-809276-7.00007-2
View details for Web of Science ID 000416037000006
-
Fine-Grained Car Detection for Visual Census Estimation
ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE. 2017: 4502-4508
View details for Web of Science ID 000485630704077
-
Human-Object Interactions Are More than the Sum of Their Parts.
Cerebral cortex (New York, N.Y. : 1991)
2017; 27 (3): 2276–88
Abstract
Understanding human-object interactions is critical for extracting meaning from everyday visual scenes and requires integrating complex relationships between human pose and object identity into a new percept. To understand how the brain builds these representations, we conducted 2 fMRI experiments in which subjects viewed humans interacting with objects, noninteracting human-object pairs, and isolated humans and objects. A number of visual regions process features of human-object interactions, including object identity information in the lateral occipital complex (LOC) and parahippocampal place area (PPA), and human pose information in the extrastriate body area (EBA) and posterior superior temporal sulcus (pSTS). Representations of human-object interactions in some regions, such as the posterior PPA (retinotopic maps PHC1 and PHC2) are well predicted by a simple linear combination of the response to object and pose information. Other regions, however, especially pSTS, exhibit representations for human-object interaction categories that are not predicted by their individual components, indicating that they encode human-object interactions as more than the sum of their parts. These results reveal the distributed networks underlying the emergent representation of human-object interactions necessary for social perception.
View details for PubMedID 27073216
View details for PubMedCentralID PMC5963823
-
Categorization influences detection: A perceptual advantage for representative exemplars of natural scene categories.
Journal of vision
2017; 17 (1): 21-?
Abstract
Traditional models of recognition and categorization proceed from registering low-level features, perceptually organizing that input, and linking it with stored representations. Recent evidence, however, suggests that this serial model may not be accurate, with object and category knowledge affecting rather than following early visual processing. Here, we show that the degree to which an image exemplifies its category influences how easily it is detected. Participants performed a two-alternative forced-choice task in which they indicated whether a briefly presented image was an intact or phase-scrambled scene photograph. Critically, the category of the scene is irrelevant to the detection task. We nonetheless found that participants "see" good images better, more accurately discriminating them from phase-scrambled images than bad scenes, and this advantage is apparent regardless of whether participants are asked to consider category during the experiment or not. We then demonstrate that good exemplars are more similar to same-category images than bad exemplars, influencing behavior in two ways: First, prototypical images are easier to detect, and second, intact good scenes are more likely than bad to have been primed by a previous trial.
View details for DOI 10.1167/17.1.21
View details for PubMedID 28114496
-
Jointly Learning Energy Expenditures and Activities using Egocentric Multimodal Signals
IEEE. 2017: 6817–26
View details for DOI 10.1109/CVPR.2017.721
View details for Web of Science ID 000418371406096
-
Scene Graph Generation by Iterative Message Passing
IEEE. 2017: 3097–3106
View details for DOI 10.1109/CVPR.2017.330
View details for Web of Science ID 000418371403019
-
Unsupervised Learning of Long-Term Motion Dynamics for Videos
IEEE. 2017: 7101–10
View details for DOI 10.1109/CVPR.2017.751
View details for Web of Science ID 000418371407022
-
Inferring and Executing Programs for Visual Reasoning
IEEE. 2017: 3008–17
View details for DOI 10.1109/ICCV.2017.325
View details for Web of Science ID 000425498403008
-
Unsupervised Visual-Linguistic Reference Resolution in Instructional Videos
IEEE. 2017: 1032–41
View details for DOI 10.1109/CVPR.2017.116
View details for Web of Science ID 000418371401011
-
CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning
IEEE. 2017: 1988–97
View details for DOI 10.1109/CVPR.2017.215
View details for Web of Science ID 000418371402007
-
A Hierarchical Approach for Generating Descriptive Image Paragraphs
IEEE. 2017: 3337–45
View details for DOI 10.1109/CVPR.2017.356
View details for Web of Science ID 000418371403045
-
Knowledge Acquisition for Visual Question Answering via Iterative Querying
IEEE. 2017: 6146–55
View details for DOI 10.1109/CVPR.2017.651
View details for Web of Science ID 000418371406026
-
Adversarially Robust Policy Learning: Active Construction of Physically-Plausible Perturbations
IEEE. 2017: 3932–39
View details for Web of Science ID 000426978203126
-
Scalable Annotation of Fine-Grained Categories Without Experts
ASSOC COMPUTING MACHINERY. 2017: 1877–81
View details for DOI 10.1145/3025453.3025930
View details for Web of Science ID 000426970501077
-
Learning to Learn from Noisy Web Videos
IEEE. 2017: 7455–63
View details for DOI 10.1109/CVPR.2017.788
View details for Web of Science ID 000418371407059
-
Dense-Captioning Events in Videos
IEEE. 2017: 706–15
View details for DOI 10.1109/ICCV.2017.83
View details for Web of Science ID 000425498400074
-
Fine-grained Recognition in the Wild: A Multi-Task Domain Adaptation Approach
IEEE. 2017: 1358–67
View details for DOI 10.1109/ICCV.2017.151
View details for Web of Science ID 000425498401044
-
Visual Semantic Planning using Deep Successor Representations
IEEE. 2017: 483–92
View details for DOI 10.1109/ICCV.2017.60
View details for Web of Science ID 000425498400051
-
Two Distinct Scene-Processing Networks Connecting Vision and Memory.
eNeuro
2016; 3 (5)
Abstract
A number of regions in the human brain are known to be involved in processing natural scenes, but the field has lacked a unifying framework for understanding how these different regions are organized and interact. We provide evidence from functional connectivity and meta-analyses for a new organizational principle, in which scene processing relies upon two distinct networks that split the classically defined parahippocampal place area (PPA). The first network of strongly connected regions consists of the occipital place area/transverse occipital sulcus and posterior PPA, which contain retinotopic maps and are not strongly coupled to the hippocampus at rest. The second network consists of the caudal inferior parietal lobule, retrosplenial complex, and anterior PPA, which connect to the hippocampus (especially anterior hippocampus), and are implicated in both visual and nonvisual tasks, including episodic memory and navigation. We propose that these two distinct networks capture the primary functional division among scene-processing regions, between those that process visual features from the current view of a scene and those that connect information from a current scene view with a much broader temporal and spatial context. This new framework for understanding the neural substrates of scene-processing bridges results from many lines of research, and makes specific functional predictions.
View details for PubMedID 27822493
-
Two Distinct Scene-Processing Networks Connecting Vision and Memory
ENEURO
2016; 3 (5)
View details for DOI 10.1523/ENEURO.0178-16.2016
View details for Web of Science ID 000391930400022
-
Typicality sharpens category representations in object-selective cortex
NEUROIMAGE
2016; 134: 170-179
Abstract
The purpose of categorization is to identify generalizable classes of objects whose members can be treated equivalently. Within a category, however, some exemplars are more representative of that concept than others. Despite long-standing behavioral effects, little is known about how typicality influences the neural representation of real-world objects from the same category. Using fMRI, we showed participants 64 subordinate object categories (exemplars) grouped into 8 basic categories. Typicality for each exemplar was assessed behaviorally and we used several multi-voxel pattern analyses to characterize how typicality affects the pattern of responses elicited in early visual and object-selective areas: V1, V2, V3v, hV4, LOC. We found that in LOC, but not in early areas, typical exemplars elicited activity more similar to the central category tendency and created sharper category boundaries than less typical exemplars, suggesting that typicality enhances within-category similarity and between-category dissimilarity. Additionally, we uncovered a brain region (cIPL) where category boundaries favor less typical categories. Our results suggest that typicality may constitute a previously unexplored principle of organization for intra-category neural structure and, furthermore, that this representation is not directly reflected in image features describing natural input, but rather built by the visual system at an intermediate processing stage.
View details for DOI 10.1016/j.neuroimage.2016.04.012
View details for Web of Science ID 000378045900017
View details for PubMedID 27079531
View details for PubMedCentralID PMC4912889
-
Leveraging the Wisdom of the Crowd for Fine-Grained Recognition
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
2016; 38 (4): 666-676
Abstract
Fine-grained recognition concerns categorization at sub-ordinate levels, where the distinction between object classes is highly local. Compared to basic level recognition, fine-grained categorization can be more challenging as there are in general less data and fewer discriminative features. This necessitates the use of a stronger prior for feature selection. In this work, we include humans in the loop to help computers select discriminative features. We introduce a novel online game called "Bubbles" that reveals discriminative features humans use. The player's goal is to identify the category of a heavily blurred image. During the game, the player can choose to reveal full details of circular regions ("bubbles"), with a certain penalty. With proper setup the game generates discriminative bubbles with assured quality. We next propose the "BubbleBank" representation that uses the human selected bubbles to improve machine recognition performance. Finally, we demonstrate how to extend BubbleBank to a view-invariant 3D representation. Experiments demonstrate that our approach yields large improvements over the previous state of the art on challenging benchmarks.
View details for DOI 10.1109/TPAMI.2015.2439285
View details for Web of Science ID 000372549700005
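Very roughly, a "bubble bank"-style feature vector can be approximated by correlating a set of human-selected patches against an image and keeping each patch's maximum response. The sketch below is a simplified stand-in for that representation, not the paper's implementation; the patch sizes and random data are placeholders:

import numpy as np
from scipy.signal import correlate2d

def bubblebank_features(image, bubbles):
    # image: 2-D grayscale array; bubbles: list of small 2-D patches cropped from
    # regions that players revealed. Each normalized patch is correlated against the
    # image and its maximum response kept, giving one feature per bubble.
    feats = []
    for patch in bubbles:
        p = (patch - patch.mean()) / (patch.std() + 1e-8)
        resp = correlate2d(image, p, mode="valid")
        feats.append(resp.max())
    return np.array(feats)

rng = np.random.default_rng(2)
img = rng.standard_normal((64, 64))
bank = [rng.standard_normal((9, 9)) for _ in range(5)]   # placeholder "bubble" patches
print(bubblebank_features(img, bank))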
-
Visual scenes are categorized by function.
Journal of experimental psychology. General
2016; 145 (1): 82-94
Abstract
How do we know that a kitchen is a kitchen by looking? Traditional models posit that scene categorization is achieved through recognizing necessary and sufficient features and objects, yet there is little consensus about what these may be. However, scene categories should reflect how we use visual information. Therefore, we test the hypothesis that scene categories reflect functions, or the possibilities for actions within a scene. Our approach is to compare human categorization patterns with predictions made by both functions and alternative models. We collected a large-scale scene category distance matrix (5 million trials) by asking observers to simply decide whether 2 images were from the same or different categories. Using the actions from the American Time Use Survey, we mapped actions onto each scene (1.4 million trials). We found a strong relationship between ranked category distance and functional distance (r = .50, or 66% of the maximum possible correlation). The function model outperformed alternative models of object-based distance (r = .33), visual features from a convolutional neural network (r = .39), lexical distance (r = .27), and other models of visual features. Using hierarchical linear regression, we found that functions captured 85.5% of overall explained variance, with nearly half of the explained variance captured only by functions, implying that the predictive power of alternative models was because of their shared variance with the function-based model. These results challenge the dominant school of thought that visual features and objects are sufficient for scene categorization, suggesting instead that a scene's category may be determined by the scene's function.
View details for DOI 10.1037/xge0000129
View details for PubMedID 26709590
View details for PubMedCentralID PMC4693295
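The study's central comparison is a rank correlation between a behaviorally measured category-distance matrix and a model-derived one. A minimal sketch of that comparison (using random placeholder matrices rather than the study's data):

import numpy as np
from scipy.stats import spearmanr

def compare_distance_matrices(d_behavior, d_model):
    # Spearman correlation between the upper triangles of two symmetric
    # category-distance matrices of the same size.
    iu = np.triu_indices_from(d_behavior, k=1)
    return spearmanr(d_behavior[iu], d_model[iu])

rng = np.random.default_rng(3)
n_categories = 16
a = rng.random((n_categories, n_categories)); a = (a + a.T) / 2
b = rng.random((n_categories, n_categories)); b = (b + b.T) / 2
rho, p = compare_distance_matrices(a, b)
print(f"rho = {rho:.2f}, p = {p:.3f}")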
-
End-to-end Learning of Action Detection from Frame Glimpses in Video
Computer Vision and Pattern Recognition
2016: 2678–87
View details for DOI 10.1109/cvpr.2016.293
-
Towards Viewpoint Invariant 3D Human Pose Estimation
SPRINGER INTERNATIONAL PUBLISHING AG. 2016: 160-177
View details for DOI 10.1007/978-3-319-46448-0_10
View details for Web of Science ID 000389382700010
-
Visual Relationship Detection with Language Priors
SPRINGER INTERNATIONAL PUBLISHING AG. 2016: 852-869
View details for DOI 10.1007/978-3-319-46448-0_51
View details for Web of Science ID 000389382700051
-
Perceptual Losses for Real-Time Style Transfer and Super-Resolution
SPRINGER INTERNATIONAL PUBLISHING AG. 2016: 694-711
View details for DOI 10.1007/978-3-319-46475-6_43
View details for Web of Science ID 000389383900043
-
The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition
SPRINGER INTERNATIONAL PUBLISHING AG. 2016: 301-320
View details for DOI 10.1007/978-3-319-46487-9_19
View details for Web of Science ID 000389384800019
-
Embracing Error to Enable Rapid Crowdsourcing
ASSOC COMPUTING MACHINERY. 2016: 3167-3179
View details for DOI 10.1145/2858036.2858115
View details for Web of Science ID 000380532903017
-
Social LSTM: Human Trajectory Prediction in Crowded Spaces
IEEE. 2016: 961-971
View details for DOI 10.1109/CVPR.2016.110
View details for Web of Science ID 000400012301002
-
Recurrent Attention Models for Depth-Based Person Identification
IEEE. 2016: 1229-1238
View details for DOI 10.1109/CVPR.2016.138
View details for Web of Science ID 000400012301030
-
Detecting events and key actors in multi-person videos
IEEE. 2016: 3043-3053
View details for DOI 10.1109/CVPR.2016.332
View details for Web of Science ID 000400012303012
-
What's the Point: Semantic Segmentation with Point Supervision
SPRINGER INTERNATIONAL PUBLISHING AG. 2016: 549–65
View details for DOI 10.1007/978-3-319-46478-7_34
View details for Web of Science ID 000389500100034
-
Visual7W: Grounded Question Answering in Images
IEEE. 2016: 4995–5004
View details for DOI 10.1109/CVPR.2016.540
View details for Web of Science ID 000400012305008
-
DenseCap: Fully Convolutional Localization Networks for Dense Captioning
IEEE. 2016: 4565–74
View details for DOI 10.1109/CVPR.2016.494
View details for Web of Science ID 000400012304068
-
Connectionist Temporal Modeling for Weakly Supervised Action Labeling
SPRINGER INTERNATIONAL PUBLISHING AG. 2016: 137–53
View details for DOI 10.1007/978-3-319-46493-0_9
View details for Web of Science ID 000389385100009
-
ImageNet Large Scale Visual Recognition Challenge
INTERNATIONAL JOURNAL OF COMPUTER VISION
2015; 115 (3): 211-252
View details for DOI 10.1007/s11263-015-0816-y
View details for Web of Science ID 000365089800001
-
Basic Level Category Structure Emerges Gradually across Human Ventral Visual Cortex
JOURNAL OF COGNITIVE NEUROSCIENCE
2015; 27 (7): 1427-1446
Abstract
Objects can be simultaneously categorized at multiple levels of specificity ranging from very broad ("natural object") to very distinct ("Mr. Woof"), with a mid-level of generality (basic level: "dog") often providing the most cognitively useful distinction between categories. It is unknown, however, how this hierarchical representation is achieved in the brain. Using multivoxel pattern analyses, we examined how well each taxonomic level (superordinate, basic, and subordinate) of real-world object categories is represented across occipitotemporal cortex. We found that, although in early visual cortex objects are best represented at the subordinate level (an effect mostly driven by low-level feature overlap between objects in the same category), this advantage diminishes compared to the basic level as we move up the visual hierarchy, disappearing in object-selective regions of occipitotemporal cortex. This pattern stems from a combined increase in within-category similarity (category cohesion) and between-category dissimilarity (category distinctiveness) of neural activity patterns at the basic level, relative to both subordinate and superordinate levels, suggesting that successive visual areas may be optimizing basic level representations.
View details for DOI 10.1162/jocn_a_00790
View details for Web of Science ID 000355418000014
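Category cohesion and distinctiveness, as used above, can be summarized from an item-by-item pattern-correlation matrix: cohesion is the mean similarity between items sharing a label, and distinctiveness here is taken as how much that exceeds the mean similarity across labels. The sketch below is an illustrative computation on placeholder data, not the authors' analysis code:

import numpy as np

def cohesion_and_distinctiveness(patterns, labels):
    # patterns: (n_items, n_voxels); labels: category labels at one taxonomic level.
    r = np.corrcoef(patterns)                    # item-by-item pattern similarity
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    cohesion = r[same & off_diag].mean()         # within-category similarity
    separation = r[~same].mean()                 # between-category similarity
    return cohesion, cohesion - separation

rng = np.random.default_rng(4)
X = rng.standard_normal((24, 300))               # 24 items x 300 voxels (placeholder)
y = np.repeat(["dog", "car", "flower"], 8)       # hypothetical basic-level labels
print(cohesion_and_distinctiveness(X, y))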
-
What you see is what you expect: rapid scene understanding benefits from prior experience
ATTENTION PERCEPTION & PSYCHOPHYSICS
2015; 77 (4): 1239-1251
Abstract
Although we are able to rapidly understand novel scene images, little is known about the mechanisms that support this ability. Theories of optimal coding assert that prior visual experience can be used to ease the computational burden of visual processing. A consequence of this idea is that more probable visual inputs should be facilitated relative to more unlikely stimuli. In three experiments, we compared the perceptions of highly improbable real-world scenes (e.g., an underwater press conference) with common images matched for visual and semantic features. Although the two groups of images could not be distinguished by their low-level visual features, we found profound deficits related to the improbable images: Observers wrote poorer descriptions of these images (Exp. 1), had difficulties classifying the images as unusual (Exp. 2), and even had lower sensitivity to detect these images in noise than to detect their more probable counterparts (Exp. 3). Taken together, these results place a limit on our abilities for rapid scene perception and suggest that perception is facilitated by prior visual experience.
View details for DOI 10.3758/s13414-015-0859-8
View details for Web of Science ID 000353819500018
View details for PubMedID 25776799
-
Parcellating connectivity in spatial maps.
PeerJ
2015; 3
Abstract
A common goal in biological sciences is to model a complex web of connections using a small number of interacting units. We present a general approach for dividing up elements in a spatial map based on their connectivity properties, allowing for the discovery of local regions underlying large-scale connectivity matrices. Our method is specifically designed to respect spatial layout and identify locally-connected clusters, corresponding to plausible coherent units such as strings of adjacent DNA base pairs, subregions of the brain, animal communities, or geographic ecosystems. Instead of using approximate greedy clustering, our nonparametric Bayesian model infers a precise parcellation using collapsed Gibbs sampling. We utilize an infinite clustering prior that intrinsically incorporates spatial constraints, allowing the model to search directly in the space of spatially-coherent parcellations. After showing results on synthetic datasets, we apply our method to both functional and structural connectivity data from the human brain. We find that our parcellation is substantially more effective than previous approaches at summarizing the brain's connectivity structure using a small number of clusters, produces better generalization to individual subject data, and reveals functional parcels related to known retinotopic maps in visual cortex. Additionally, we demonstrate the generality of our method by applying the same model to human migration data within the United States. This analysis reveals that migration behavior is generally influenced by state borders, but also identifies regional communities which cut across state lines. Our parcellation approach has a wide range of potential applications in understanding the spatial structure of complex biological networks.
View details for DOI 10.7717/peerj.784
View details for PubMedID 25737822
View details for PubMedCentralID PMC4338796
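The parcellation itself uses a nonparametric Bayesian model inferred with collapsed Gibbs sampling, which is too involved to reproduce here. The general goal, grouping spatially adjacent elements whose connectivity profiles are similar, can be mimicked with an off-the-shelf, spatially constrained clustering as a much simpler stand-in; in the sketch below the element positions, connectivity profiles, neighbor count, and number of clusters are all placeholders:

import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(5)
n_elements, n_targets = 400, 50
positions = rng.random((n_elements, 2))                   # spatial coordinates of elements
profiles = rng.standard_normal((n_elements, n_targets))   # each row: a connectivity profile

# Restrict merges to spatial neighbors so clusters stay locally coherent.
adjacency = kneighbors_graph(positions, n_neighbors=8, include_self=False)
model = AgglomerativeClustering(n_clusters=10, connectivity=adjacency, linkage="ward")
parcel_labels = model.fit_predict(profiles)
print(np.bincount(parcel_labels))                         # sizes of the discovered parcels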
-
Image Retrieval using Scene Graphs
IEEE. 2015: 3668–78
View details for Web of Science ID 000387959203075
-
Best of both worlds: human-machine collaboration for object annotation
IEEE. 2015: 2121-2131
View details for Web of Science ID 000387959202016
-
Deep Visual-Semantic Alignments for Generating Image Descriptions
IEEE. 2015: 3128-3137
View details for Web of Science ID 000387959203017
-
Learning semantic relationships for better action retrieval in images
IEEE. 2015: 1100-1109
View details for Web of Science ID 000387959201013
-
Fine-Grained Recognition without Part Annotations
IEEE. 2015: 5546-5555
View details for Web of Science ID 000387959205065
-
Improving Image Classification with Location Context
IEEE. 2015: 1008-1016
View details for DOI 10.1109/ICCV.2015.121
View details for Web of Science ID 000380414100113
-
Learning Temporal Embeddings for Complex Video Analysis
IEEE. 2015: 4471-4479
View details for DOI 10.1109/ICCV.2015.508
View details for Web of Science ID 000380414100500
-
RGB-W: When Vision Meets Wireless
IEEE. 2015: 3289–97
View details for DOI 10.1109/ICCV.2015.376
View details for Web of Science ID 000380414100368
-
Love Thy Neighbors: Image Annotation by Exploiting Image Metadata
IEEE International Conference on Computer Vision (ICCV)
2015
View details for DOI 10.1109/ICCV.2015.525
-
Object Bank: An Object-Level Image Representation for High-Level Visual Recognition
INTERNATIONAL JOURNAL OF COMPUTER VISION
2014; 107 (1): 20-39
View details for DOI 10.1007/s11263-013-0660-x
View details for Web of Science ID 000331640500002
-
Visual categorization is automatic and obligatory: evidence from Stroop-like paradigm.
Journal of vision
2014; 14 (1)
Abstract
Human observers categorize visual stimuli with remarkable efficiency--a result that has led to the suggestion that object and scene categorization may be automatic processes. We tested this hypothesis by presenting observers with a modified Stroop paradigm in which object or scene words were presented over images of objects or scenes. Terms were either congruent or incongruent with the images. Observers classified the words as being object or scene terms while ignoring images. Classifying a word on an incongruent image came at a cost for both objects and scenes. Furthermore, automatic processing was observed for entry-level scene categories, but not superordinate-level categories, suggesting that not all rapid categorizations are automatic. Taken together, we have demonstrated that entry-level visual categorization is an automatic and obligatory process.
View details for DOI 10.1167/14.1.14
View details for PubMedID 24434626
-
Reasoning about Object Affordances in a Knowledge Base Representation
13th European Conference on Computer Vision (ECCV)
SPRINGER-VERLAG BERLIN. 2014: 408–424
View details for Web of Science ID 000345328300027
-
Efficient Image and Video Co-localization with Frank-Wolfe Algorithm
13th European Conference on Computer Vision (ECCV)
SPRINGER-VERLAG BERLIN. 2014: 253–268
View details for Web of Science ID 000345300000017
-
Linking People in Videos with "Their" Names Using Coreference Resolution
13th European Conference on Computer Vision (ECCV)
SPRINGER INT PUBLISHING AG. 2014: 95–110
View details for Web of Science ID 000345524200007
-
Learning Features and Parts for Fine-Grained Recognition
IEEE COMPUTER SOC. 2014: 26–33
View details for DOI 10.1109/ICPR.2014.15
View details for Web of Science ID 000359818000004
-
Co-localization in Real-World Images
IEEE. 2014: 1464–71
View details for DOI 10.1109/CVPR.2014.190
View details for Web of Science ID 000361555601065
-
Understanding the 3D Layout of a Cluttered Room From Multiple Images
IEEE. 2014: 690-697
View details for Web of Science ID 000356144800094
-
Socially-aware Large-scale Crowd Forecasting
IEEE. 2014: 2211-2218
View details for DOI 10.1109/CVPR.2014.283
View details for Web of Science ID 000361555602033
-
Crowdsourcing in Computer Vision
FOUNDATIONS AND TRENDS IN COMPUTER GRAPHICS AND VISION
2014; 10 (3): I-243
View details for DOI 10.1561/0600000071
View details for Web of Science ID 000219963400001
-
Deep Fragment Embeddings for Bidirectional Image Sentence Mapping
NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2014
View details for Web of Science ID 000452647100060
-
Large-scale Video Classification with Convolutional Neural Networks
IEEE. 2014: 1725–32
View details for DOI 10.1109/CVPR.2014.223
View details for Web of Science ID 000361555601098
-
Differential connectivity within the Parahippocampal Place Area
NEUROIMAGE
2013; 75: 228-237
Abstract
The Parahippocampal Place Area (PPA) has traditionally been considered a homogeneous region of interest, but recent evidence from both human studies and animal models has suggested that PPA may be composed of functionally distinct subunits. To investigate this hypothesis, we utilize a functional connectivity measure for fMRI that can estimate connectivity differences at the voxel level. Applying this method to whole-brain data from two experiments, we provide the first direct evidence that anterior and posterior PPA exhibit distinct connectivity patterns, with anterior PPA more strongly connected to regions in the default mode network (including the parieto-medial temporal pathway) and posterior PPA more strongly connected to occipital visual regions. We show that object sensitivity in PPA also has an anterior-posterior gradient, with stronger responses to abstract objects in posterior PPA. These findings cast doubt on the traditional view of PPA as a single coherent region, and suggest that PPA is composed of one subregion specialized for the processing of low-level visual features and object shape, and a separate subregion more involved in memory and scene context.
View details for DOI 10.1016/j.neuroimage.2013.02.073
View details for Web of Science ID 000318208000024
View details for PubMedID 23507385
View details for PubMedCentralID PMC3683120
-
Biodistribution, pharmacokinetics and toxicology of Ag2S near-infrared quantum dots in mice
BIOMATERIALS
2013; 34 (14): 3639-3646
Abstract
Ag2S quantum dots (QDs) have been demonstrated as a promising near-infrared II (NIR-II, 1.0-1.4 μm) emitting nanoprobe for in vivo imaging and detection. In this work, we carefully study the long-term in vivo biodistribution of Ag2S QDs functionalized with polyethylene glycol (PEG) and systematically examine the potential toxicity of Ag2S QDs over time. Our results show that PEGylated-Ag2S QDs are mainly accumulated in the reticuloendothelial system (RES) including liver and spleen after intravenous administration and can be gradually cleared, mostly by fecal excretion. PEGylated-Ag2S QDs do not cause appreciable toxicity at our tested doses (15 and 30 mg/kg) to the treated mice over a period of 2 months as evidenced by blood biochemistry, hematological analysis and histological examinations. Our work lays a solid foundation for further biomedical applications of Ag2S QDs as an important in vivo imaging agent in the NIR-II region.
View details for DOI 10.1016/j.biomaterials.2013.01.089
View details for Web of Science ID 000317534200010
View details for PubMedID 23415643
-
Comprehensive next-generation sequence analyses of the entire mitochondrial genome reveal new insights into the molecular diagnosis of mitochondrial DNA disorders
GENETICS IN MEDICINE
2013; 15 (5): 388-394
Abstract
Purpose: The application of massively parallel sequencing technology to the analysis of the mitochondrial genome has demonstrated great improvement in the molecular diagnosis of mitochondrial DNA-related disorders. The objective of this study was to investigate the performance characteristics and to gain new insights into the analysis of the mitochondrial genome. Methods: The entire mitochondrial genome was analyzed as a single amplicon using a long-range PCR-based enrichment approach coupled with massively parallel sequencing. The interference of the nuclear mitochondrial DNA homologs was distinguished from the actual mitochondrial DNA sequences by comparison with the results obtained from conventional PCR-based Sanger sequencing using multiple pairs of primers. Results: Our results demonstrated uniform coverage of the entire mitochondrial genome. Massively parallel sequencing of the single amplicon revealed the presence of single-nucleotide polymorphisms and nuclear homologs of mtDNA sequences that cause erroneous and inaccurate variant calls when the PCR/Sanger sequencing approach is used. This single-amplicon massively parallel sequencing strategy provides an accurate quantification of mutation heteroplasmy as well as the detection and mapping of mitochondrial DNA deletions. Conclusion: The ability to quantitatively and qualitatively evaluate every single base of the entire mitochondrial genome is indispensable to the accurate molecular diagnosis and genetic counseling of mitochondrial DNA-related disorders. This new approach may be considered as first-line testing for comprehensive analysis of the mitochondrial genome.
View details for DOI 10.1038/gim.2012.144
View details for Web of Science ID 000318888600011
View details for PubMedID 23288206
-
DIFFERENTIAL CONNECTIVITY WITHIN THE PARAHIPPOCAMPAL PLACE AREA
20th Annual Meeting of the Cognitive-Neuroscience-Society
MIT PRESS. 2013: 146–146
View details for Web of Science ID 000317030500588
-
Video Event Understanding using Natural Language Descriptions
IEEE International Conference on Computer Vision (ICCV)
IEEE. 2013: 905–912
View details for DOI 10.1109/ICCV.2013.117
View details for Web of Science ID 000351830500113
-
Detecting avocados to zucchinis: what have we done, and where are we going?
IEEE International Conference on Computer Vision (ICCV)
IEEE. 2013: 2064–2071
View details for DOI 10.1109/ICCV.2013.258
View details for Web of Science ID 000351830500258
-
3D Object Representations for Fine-Grained Categorization
IEEE International Conference on Computer Vision Workshops (ICCVW)
IEEE. 2013: 554–561
View details for DOI 10.1109/ICCVW.2013.77
View details for Web of Science ID 000349847200075
-
Object Discovery in 3D scenes via Shape Analysis
IEEE International Conference on Robotics and Automation (ICRA)
IEEE. 2013: 2088–2095
View details for Web of Science ID 000337617302015
-
Combining the Right Features for Complex Event Recognition
IEEE. 2013: 2696-2703
View details for DOI 10.1109/ICCV.2013.335
View details for Web of Science ID 000351830500337
-
Discovering Object Functionality
IEEE. 2013: 2512-2519
View details for DOI 10.1109/ICCV.2013.312
View details for Web of Science ID 000351830500314
-
Fine-Grained Crowdsourcing for Fine-Grained Recognition
26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
IEEE. 2013: 580–587
View details for DOI 10.1109/CVPR.2013.81
View details for Web of Science ID 000331094300074
-
Social Role Discovery in Human Events
26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
IEEE. 2013: 2475–2482
View details for DOI 10.1109/CVPR.2013.320
View details for Web of Science ID 000331094302068
-
Discriminative Segment Annotation in Weakly Labeled Video
26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
IEEE. 2013: 2483–2490
View details for DOI 10.1109/CVPR.2013.321
View details for Web of Science ID 000331094302069
- Object Discovery in 3D Scenes via Shape Analysis 2013
- Combining the Right Features for Complex Event Recognition 2013
- Discovering Object Functionality 2013
- Discriminative Segment Annotation in Weakly Labeled Video 2013
- Detecting avocados to zucchinis: what have we done, and where are we going? 2013
- Video Event Understanding using Natural Language Descriptions 2013
- Social Role Discovery in Human Events 2013
- Object Bank: An Object-Level Image Representation for High-Level Visual Recognition 2013
- Differential Connectivity Within the Parahippocampal Place Area NeuroImage 2013
- Good Exemplars of Natural Scene Categories Elicit Clearer Patterns than Bad Exemplars but not Greater BOLD Activity PLoS ONE. 2013
- Fine-Grained Crowdsourcing for Fine-Grained Recognition 2013
-
NATURAL STIMULI ACQUIRE BASIC-LEVEL ADVANTAGE IN OBJECT-SELECTIVE CORTEX
20th Annual Meeting of the Cognitive-Neuroscience-Society
MIT PRESS. 2013: 205–206
View details for Web of Science ID 000317030501156
-
INTERNAL REPRESENTATIONS OF REAL-WORLD SCENE CATEGORIES
20th Annual Meeting of the Cognitive-Neuroscience-Society
MIT PRESS. 2013: 205–205
View details for Web of Science ID 000317030501155
-
Voxel-level functional connectivity using spatial regularization
NEUROIMAGE
2012; 63 (3): 1099-1106
Abstract
Discovering functional connectivity between and within brain regions is a key concern in neuroscience. Due to the noise inherent in fMRI data, it is challenging to characterize the properties of individual voxels, and current methods are unable to flexibly analyze voxel-level connectivity differences. We propose a new functional connectivity method which incorporates a spatial smoothness constraint using regularized optimization, enabling the discovery of voxel-level interactions between brain regions from the small datasets characteristic of fMRI experiments. We validate our method in two separate experiments, demonstrating that we can learn coherent connectivity maps that are consistent with known results. First, we examine the functional connectivity between early visual areas V1 and VP, confirming that this connectivity structure preserves retinotopic mapping. Then, we show that two category-selective regions in ventral cortex - the Parahippocampal Place Area (PPA) and the Fusiform Face Area (FFA) - exhibit an expected peripheral versus foveal bias in their connectivity with visual area hV4. These results show that our approach is powerful, widely applicable, and capable of uncovering complex connectivity patterns with only a small amount of input data.
View details for DOI 10.1016/j.neuroimage.2012.07.046
View details for Web of Science ID 000310379100011
View details for PubMedID 22846660
View details for PubMedCentralID PMC3592577
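In the spirit of the approach described above (though not the authors' formulation), connectivity weights from target voxels to a seed can be estimated while penalizing differences between spatially adjacent weights. The sketch below solves a ridge-style problem with an added first-difference smoothness penalty over a 1-D strip of voxels; the penalty strengths and data are placeholders:

import numpy as np

def smooth_connectivity_weights(X, y, lam_ridge=1.0, lam_smooth=10.0):
    # X: (T, V) time series of V voxels arranged along a 1-D strip; y: (T,) seed series.
    # Minimizes ||y - Xw||^2 + lam_ridge*||w||^2 + lam_smooth*||Dw||^2,
    # where D takes differences between neighboring voxels' weights.
    T, V = X.shape
    D = np.diff(np.eye(V), axis=0)               # (V-1, V) first-difference operator
    A = X.T @ X + lam_ridge * np.eye(V) + lam_smooth * (D.T @ D)
    return np.linalg.solve(A, X.T @ y)

rng = np.random.default_rng(6)
T, V = 150, 60
X = rng.standard_normal((T, V))
true_w = np.convolve(rng.standard_normal(V), np.ones(7) / 7, mode="same")  # smooth ground truth
y = X @ true_w + 0.5 * rng.standard_normal(T)
w_hat = smooth_connectivity_weights(X, y)
print(np.corrcoef(true_w, w_hat)[0, 1])          # recovered map should track the smooth truth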
-
Impact of restricted marital practices on genetic variation in an endogamous Gujarati group
AMERICAN JOURNAL OF PHYSICAL ANTHROPOLOGY
2012; 149 (1): 92-103
Abstract
Recent studies have examined the influence on patterns of human genetic variation of a variety of cultural practices. In India, centuries-old marriage customs have introduced extensive social structuring into the contemporary population, potentially with significant consequences for genetic variation. Social stratification in India is evident as social classes that are defined by endogamous groups known as castes. Within a caste, there exist endogamous groups known as gols (marriage circles), each of which comprises a small number of exogamous gotra (lineages). Thus, while consanguinity is strictly avoided and some randomness in mate selection occurs within the gol, gene flow is limited with groups outside the gol. Gujarati Patels practice this form of "exogamic endogamy." We have analyzed genetic variation in one such group of Gujarati Patels, the Chha Gaam Patels (CGP), who comprise individuals from six villages. Population structure analysis of 1,200 autosomal loci offers support for the existence of distinctive multilocus genotypes in the CGP with respect to both non-Gujaratis and other Gujaratis, and indicates that CGP individuals are genetically very similar. Analysis of Y-chromosomal and mitochondrial haplotypes provides support for both patrilocal and patrilineal practices within the gol, and a low-level of female gene flow into the gol. Our study illustrates how the practice of gol endogamy has introduced fine-scale genetic structure into the population of India, and contributes more generally to an understanding of the way in which marriage practices affect patterns of genetic variation.
View details for DOI 10.1002/ajpa.22101
View details for Web of Science ID 000307729300009
View details for PubMedID 22729696
-
Recognizing Human-Object Interactions in Still Images by Modeling the Mutual Context of Objects and Human Poses
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
2012; 34 (9): 1691-1703
Abstract
Detecting objects in cluttered scenes and estimating articulated human body parts from 2D images are two challenging problems in computer vision. The difficulty is particularly pronounced in activities involving human-object interactions (e.g., playing tennis), where the relevant objects tend to be small or only partially visible and the human body parts are often self-occluded. We observe, however, that objects and human poses can serve as mutual context to each other-recognizing one facilitates the recognition of the other. In this paper, we propose a mutual context model to jointly model objects and human poses in human-object interaction activities. In our approach, object detection provides a strong prior for better human pose estimation, while human pose estimation improves the accuracy of detecting the objects that interact with the human. On a six-class sports data set and a 24-class people interacting with musical instruments data set, we show that our mutual context model outperforms state of the art in detecting very difficult objects and estimating human poses, as well as classifying human-object interaction activities.
View details for DOI 10.1109/TPAMI.2012.67
View details for Web of Science ID 000306409100004
View details for PubMedID 22392710
-
Ag2S Quantum Dot: A Bright and Biocompatible Fluorescent Nanoprobe in the Second Near-Infrared Window
ACS NANO
2012; 6 (5): 3695-3702
Abstract
Ag(2)S quantum dots (QDs) emitting in the second near-infrared region (NIR-II, 1.0-1.4 μm) are demonstrated as a promising fluorescent probe with both bright photoluminescence and high biocompatibility for the first time. Highly selective in vitro targeting and imaging of different cell lines are achieved using biocompatible NIR-II Ag(2)S QDs with different targeting ligands. The cytotoxicity study illustrates the Ag(2)S QDs with negligible effects in altering cell proliferation, triggering apoptosis and necrosis, generating reactive oxygen species, and causing DNA damage. Our results have opened up the possibilities of using these biocompatible Ag(2)S QDs for in vivo anatomical imaging and early stage tumor diagnosis with deep tissue penetration, high sensitivity, and elevated spatial and temporal resolution owing to their high emission efficiency in the unique NIR-II imaging window.
View details for DOI 10.1021/nn301218z
View details for Web of Science ID 000304231700007
View details for PubMedID 22515909
View details for PubMedCentralID PMC3358570
-
Guidelines for the use and interpretation of assays for monitoring autophagy
AUTOPHAGY
2012; 8 (4): 445-544
Abstract
In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process vs. those that measure flux through the autophagy pathway (i.e., the complete process); thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from stimuli that result in increased autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
View details for DOI 10.4161/auto.19496
View details for Web of Science ID 000305403400002
View details for PubMedID 22966490
View details for PubMedCentralID PMC3404883
-
The Integrative Effects of Cognitive Reappraisal on Negative Affect: Associated Changes in Secretory Immunoglobulin A, Unpleasantness and ERP Activity
PLOS ONE
2012; 7 (2)
Abstract
Although the regulatory role of cognitive reappraisal in negative emotional responses is widely recognized, this reappraisal's effect on acute saliva secretory immunoglobulin A (SIgA), as well as the relationships among affective, immunological, and event-related potential (ERP) changes, remains unclear. In this study, we selected only people with low positive coping scores (PCSs) as measured by the Trait Coping Style Questionnaire to avoid confounding by intrinsic coping styles. First, we found that the acute stress of viewing unpleasant pictures consistently decreased SIgA concentration and secretion rate, increased perceptions of unpleasantness and amplitude of late positive potentials (LPPs) between 200-300 ms and 400-1000 ms. After participants used cognitive reappraisal, their SIgA concentration and secretion rate significantly increased and their unpleasantness and LPP amplitudes significantly decreased compared with a control condition. Second, we found a significantly positive correlation between the increases in SIgA and the decreases in unpleasantness and a significantly negative correlation between the increases in SIgA and the increases in LPP across the two groups. This study is the first to demonstrate that cognitive reappraisal reverses the decrease of SIgA. In addition, it revealed strong correlations among affective, SIgA and electrophysiological changes with convergent multilevel evidence.
View details for DOI 10.1371/journal.pone.0030761
View details for Web of Science ID 000301979000013
View details for PubMedID 22319586
View details for PubMedCentralID PMC3271092
-
A Codebook-Free and Annotation-Free Approach for Fine-Grained Image Categorization
IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
IEEE. 2012: 3466–3473
View details for Web of Science ID 000309166203080
-
Object-Centric Spatial Pooling for Image Classification
12th European Conference on Computer Vision (ECCV)
SPRINGER-VERLAG BERLIN. 2012: 1–15
View details for Web of Science ID 000343415400001
-
To Err Is Human: Correlating fMRI Decoding and Behavioral Errors to Probe the Neural Representation of Natural Scene Categories
VISUAL POPULATION CODES: TOWARD A COMMON MULTIVARIATE FRAMEWORK FOR CELL RECORDING AND FUNCTIONAL IMAGING
2012: 391-415
View details for Web of Science ID 000299078100017
-
Action Recognition with Exemplar Based 2.5D Graph Matching
SPRINGER-VERLAG BERLIN. 2012: 173-186
View details for Web of Science ID 000342818800013
- Web Image Prediction Using Multivariate Point Processes 2012
- Shifting Weights: Adapting Object Detectors from Image to Video 2012
- Voxel-Level Functional Connectivity using Spatial Regularization NeuroImage 2012
- Efficient Euclidean Projections onto the Intersection of Norm Balls 2012
- Crowdsourcing Annotations for Visual Object Detection 2012
- Action Recognition with Exemplar Based 2.5D Graph Matching 2012
- Object-centric spatial pooling for image classification 2012
- Recognizing Human Actions in Still Images by Modeling the Mutual Context of Objects and Human Poses 2012
-
Learning Latent Temporal Structure for Complex Event Detection
IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
IEEE. 2012: 1250–1257
View details for Web of Science ID 000309166201051
-
Hedging Your Bets: Optimizing Accuracy-Specificity Trade-offs in Large Scale Visual Recognition
IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
IEEE. 2012: 3450–3457
View details for Web of Science ID 000309166203078
-
Automatic basic-level object and scene categorization
PSYCHOLOGY PRESS. 2012: 1028–31
View details for DOI 10.1080/13506285.2012.726470
View details for Web of Science ID 000310949900008
-
Extended Graphical Model for Analysis of Dynamic Contrast-Enhanced MRI
MAGNETIC RESONANCE IN MEDICINE
2011; 66 (3): 868-878
Abstract
Kinetic analysis with mathematical models has become increasingly important to quantify physiological parameters in computed tomography (CT), positron emission tomography (PET), and dynamic contrast-enhanced MRI (DCE-MRI). The modified Kety/Tofts model and the graphical (Patlak) model have been widely applied to DCE-MRI results in disease processes such as cancer, inflammation, and ischemia. In this article, an intermediate model between the modified Kety/Tofts and Patlak models is derived from a mathematical expansion of the modified Kety/Tofts model. Simulations and an in vivo experiment involving DCE-MRI of carotid atherosclerosis were used to compare the new extended graphical model with the modified Kety/Tofts model and the Patlak model. In our simulated circumstances and the carotid artery application, we found that the extended graphical model exhibited lower noise sensitivity and provided more accurate estimates of the volume transfer constant (K(trans)) and fractional plasma volume (v(p)) than the modified Kety/Tofts model for DCE-MRI acquisitions of total duration less than 100-300 s, depending on kinetic parameters. In comparison with the Patlak model, we found that the extended graphical model exhibited 74.4-99.8% less bias in estimates of K(trans). Thus, the extended graphical model may allow kinetic modeling of DCE-MRI results with shortened data acquisition periods, without sacrificing accuracy in estimates of K(trans) and v(p).
View details for DOI 10.1002/mrm.22819
View details for Web of Science ID 000293988000028
View details for PubMedID 21394770
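For context, the textbook forms of the models being compared are: the modified Kety/Tofts model, C_t(t) = K_trans * integral from 0 to t of C_p(tau) * exp(-(K_trans/v_e)(t - tau)) d tau + v_p * C_p(t), and the Patlak model, which omits the exponential washout term. The sketch below simulates a modified-Tofts tissue curve and recovers K_trans and v_p with a Patlak-style linear fit; the arterial input function, time grid, and parameter values are invented for illustration, and this is not the paper's extended graphical model:

import numpy as np

def tofts_curve(t, cp, ktrans, ve, vp):
    # Modified Kety/Tofts tissue concentration via a discrete convolution.
    dt = t[1] - t[0]
    kep = ktrans / ve
    return ktrans * np.convolve(cp, np.exp(-kep * t))[: len(t)] * dt + vp * cp

def patlak_fit(t, cp, ct):
    # Least-squares Patlak fit: ct ~ ktrans * cumulative integral of cp + vp * cp.
    integ = np.concatenate(([0.0], np.cumsum((cp[1:] + cp[:-1]) / 2 * np.diff(t))))
    A = np.column_stack([integ, cp])
    ktrans, vp = np.linalg.lstsq(A, ct, rcond=None)[0]
    return ktrans, vp

t = np.arange(0, 300, 2.0)                     # seconds; invented acquisition grid
cp = 5.0 * (t / 60.0) * np.exp(-t / 80.0)      # toy arterial input function
ct = tofts_curve(t, cp, ktrans=0.002, ve=0.2, vp=0.05)   # per-second units, illustrative
print(patlak_fit(t, cp, ct))                   # Patlak ignores washout, biasing ktrans for long acquisitions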
-
Combining Randomization and Discrimination for Fine-Grained Image Categorization
IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
IEEE. 2011: 1577–1584
View details for Web of Science ID 000295615801084
-
Hierarchical Semantic Indexing for Large Scale Image Retrieval
IEEE. 2011: 785-792
View details for Web of Science ID 000295615800098
-
Distributed Cosegmentation via Submodular Optimization on Anisotropic Diffusion
IEEE. 2011: 169-176
View details for Web of Science ID 000300061900022
- Classifying Actions and Measuring Action Similarity by Modeling the Mutual Context of Objects and Human Poses 2011
- Distributed cosegmentation via submodular optimization on anisotropic diffusion 2011
- Large-Scale Category Structure Aware Image Categorization 2011
- Online Detection of Unusual Events in Videos via Dynamic Sparse Coding 2011
- Hierarchical Semantic Indexing for Large Scale Image Retrieval 2011
- Simple line drawings suffice for functional MRI decoding of natural scene categories 2011
- ReVision: Automated Classification, Analysis and Redesign of Chart Images 2011
- Fast and Balanced: Efficient Label Tree Learning for Large Scale Object Recognition 2011
-
Human Action Recognition by Learning Bases of Action Attributes and Parts
IEEE International Conference on Computer Vision (ICCV)
IEEE. 2011: 1331–1338
View details for Web of Science ID 000300061900169
-
MicroRNA-210 as a Novel Therapy for Treatment of Ischemic Heart Disease
82nd National Conference and Exhibitions and Scientific Sessions of the American-Heart-Association
LIPPINCOTT WILLIAMS & WILKINS. 2010: S124–S131
Abstract
MicroRNAs are involved in various critical functions, including the regulation of cellular differentiation, proliferation, angiogenesis, and apoptosis. We hypothesize that microRNA-210 can rescue cardiac function after myocardial infarction by upregulation of angiogenesis and inhibition of cellular apoptosis in the heart. Using microRNA microarrays, we first showed that microRNA-210 was highly expressed in live mouse HL-1 cardiomyocytes compared with apoptotic cells after 48 hours of hypoxia exposure. We confirmed by polymerase chain reaction that microRNA-210 was robustly induced in these cells. Gain-of-function and loss-of-function approaches were used to investigate microRNA-210 therapeutic potential in vitro. After transduction, microRNA-210 can upregulate several angiogenic factors, inhibit caspase activity, and prevent cell apoptosis compared with control. Afterward, adult FVB mice underwent intramyocardial injections with minicircle vector carrying microRNA-210 precursor, minicircle carrying microRNA-scramble, or sham surgery. At 8 weeks, echocardiography showed a significant improvement of left ventricular fractional shortening in the minicircle vector carrying microRNA-210 precursor group compared with the minicircle carrying microRNA-scramble control. Histological analysis confirmed decreased cellular apoptosis and increased neovascularization. Finally, 2 potential targets of microRNA-210, Efna3 and Ptp1b, involved in angiogenesis and apoptosis were confirmed through additional experimental validation. MicroRNA-210 can improve angiogenesis, inhibit apoptosis, and improve cardiac function in a murine model of myocardial infarction. It represents a potential novel therapeutic approach for treatment of ischemic heart disease.
View details for DOI 10.1161/CIRCULATIONAHA.109.928424
View details for Web of Science ID 000282294800019
View details for PubMedID 20837903
View details for PubMedCentralID PMC2952325
-
Bayesian Variable Selection in Structured High-Dimensional Covariate Spaces With Applications in Genomics
JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION
2010; 105 (491): 1202-1214
View details for DOI 10.1198/jasa.2010.tm08177
View details for Web of Science ID 000283695300034
-
Learning Object Categories From Internet Image Searches
PROCEEDINGS OF THE IEEE
2010; 98 (8): 1453–66
View details for DOI 10.1109/JPROC.2010.2048990
View details for Web of Science ID 000283413400008
-
A PKC-beta inhibitor treatment reverses cardiac microvascular barrier dysfunction in diabetic rats
MICROVASCULAR RESEARCH
2010; 80 (1): 158-165
Abstract
The PKC-beta inhibitor ruboxistaurin (RBX or LY333531) prevents diabetic renal and retinal microvascular complications. However, the effect of RBX on diabetic cardiac microvascular dysfunction is still unclear. In this study, we aimed to investigate the effects and mechanisms of RBX treatment on cardiac endothelial barrier dysfunction in high glucose states. We demonstrated that RBX treatment suppressed high glucose-induced PKC-betaII activation and phosphorylation of beta-catenin in both in vivo and in vitro experiments. Meanwhile, RBX treatment protected cardiac microvascular barrier function in diabetic animals and the monolayer barrier function of cultured cardiac microvascular endothelial cells (CMECs), reproducing the same effect as PKC-betaII siRNA. These results provide new insight into the protective properties of PKC-beta inhibition against cardiac endothelial barrier dysfunction. The PKC-beta inhibitor RBX prevented chronic cardiac microvascular barrier dysfunction and improved endothelial cell-cell junctional function in high glucose states.
View details for DOI 10.1016/j.mvr.2010.01.003
View details for Web of Science ID 000278950700022
View details for PubMedID 20079359
-
OPTIMOL: Automatic Online Picture Collection via Incremental Model Learning
INTERNATIONAL JOURNAL OF COMPUTER VISION
2010; 88 (2): 147-168
View details for DOI 10.1007/s11263-009-0265-6
View details for Web of Science ID 000275955400002
-
Modeling Mutual Context of Object and Human Pose in Human-Object Interaction Activities
23rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
IEEE COMPUTER SOC. 2010: 17–24
View details for Web of Science ID 000287417500003
-
What, Where and Who? Telling the Story of an Image by Activity Classification, Scene Recognition and Object Categorization
COMPUTER VISION: DETECTION, RECOGNITION AND RECONSTRUCTION
2010; 285: 157–71
View details for Web of Science ID 000277810300006
-
Efficient Extraction of Human Motion Volumes by Tracking
IEEE COMPUTER SOC. 2010: 655-662
View details for DOI 10.1109/CVPR.2010.5540152
View details for Web of Science ID 000287417500084
-
What Does Classifying More Than 10,000 Image Categories Tell Us?
SPRINGER-VERLAG BERLIN. 2010: 71+
View details for Web of Science ID 000286578400006
-
Multi-view Object Categorization and Pose Estimation
COMPUTER VISION: DETECTION, RECOGNITION AND RECONSTRUCTION
2010; 285: 205-231
View details for Web of Science ID 000277810300008
-
Connecting Modalities: Semi-supervised Segmentation and Annotation of Images Using Unaligned Text Corpora
23rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
IEEE COMPUTER SOC. 2010: 966–973
View details for Web of Science ID 000287417501002
-
Building and Using a Semantivisual Image Hierarchy
23rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
IEEE COMPUTER SOC. 2010: 3336–3343
View details for Web of Science ID 000287417503050
-
Image Segmentation with Topic Random Field
SPRINGER-VERLAG BERLIN. 2010: 785+
View details for Web of Science ID 000286578400057
- To err is human: investigating neural function by correlating error patterns with human behavior. MIT Press, Cambridge, Massachusetts. 2010
- Efficient Extraction of Human Motion Volumes by Tracking 2010
- Building and Using a Semantivisual Image Hierarchy 2010
- Objects as Attributes for Scene Classification 2010
- What does classifying more than 10,000 image categories tell us? 2010
- Large Margin Learning of Upstream Scene Understanding Models 2010
- Object Bank: A High-Level Image Representation for Scene Classification and Semantic Feature Sparsification 2010
- Learning object categories from Internet image searches 2010
- Multi-view Object Categorization and Pose Estimation Studies in Computational Intelligence- Computer Vision 2010: 1
- What, Where and Who? Telling the Story of an Image by Activity Classification, Scene Recognition and Object Categorization Studies in Computational Intelligence- Computer Vision 2010: 1
- Connecting Modalities: Semi-supervised Segmentation and Annotation of Images Using Unaligned Text Corpora 2010
- Attribute learning in large-scale datasets 2010
- Image Segmentation with Topic Random Fields 2010
-
Modeling Temporal Structure of Decomposable Motion Segments for Activity Classification
11th European Conference on Computer Vision
SPRINGER-VERLAG BERLIN. 2010: 392–405
View details for Web of Science ID 000286164000029
-
Grouplet: A Structured Image Representation for Recognizing Human and Object Interactions
23rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
IEEE COMPUTER SOC. 2010: 9–16
View details for Web of Science ID 000287417500002
- Neural mechanisms of rapid natural scene categorization in human visual cortex Nature 2009
-
Learning a dense multi-view representation for detection, viewpoint classification and synthesis of object categories
12th IEEE International Conference on Computer Vision
IEEE. 2009: 213–220
View details for Web of Science ID 000294955300028
- Simultaneous Image Classification and Annotation 2009
- Exploring Functional Connectivity of the Human Brain using Multivariate Information Analysis 2009
- Learning a dense multi-view representation for detection, viewpoint classification and synthesis of object categories 2009
- A Multi-View Probabilistic Model for 3D Object Classes 2009
- ImageNet: A Large-Scale Hierarchical Image Database 2009
- Towards Total Scene Understanding: Classification, Annotation and Segmentation in an Automatic Framework 2009
- OPTIMOL: automatic Online Picture collecTion via Incremental MOdel Learning 2009
- Hierarchical Mixture of Classification Experts Uncovers Interactions between Brain Regions 2009
- Natural scene categories revealed in distributed patterns of activity in the human brain Journal of Neuroscience 2009
-
Therapeutic strategies for Parkinson's disease: The ancient meets the future - Traditional Chinese herbal medicine, electroacupuncture, gene therapy and stem cells
NEUROCHEMICAL RESEARCH
2008; 33 (10): 1956-1963
Abstract
In China, it has been estimated that more than 2.0 million people suffer from Parkinson's disease, which has become one of the most common chronic neurodegenerative disorders in recent years. For many years, scientists have struggled to find new therapeutic approaches for this disease. Since 1994, our research group, led by Drs. Ji-Sheng Han and Xiao-Min Wang of the Neuroscience Research Institute, Peking University, has developed several prospective treatment strategies for the disease. These studies cover traditional Chinese medicine (herbal formulas and acupuncture) as well as modern technologies such as gene therapy and stem cell replacement therapy, and have achieved some original results. It is hoped that these data may prove beneficial both for further research and for future clinical use in the treatment of Parkinson's disease.
View details for DOI 10.1007/s11064-008-9691-z
View details for Web of Science ID 000259190900008
View details for PubMedID 18404373
- View synthesis for recognizing unseen poses of object classes. 2008
- Unsupervised learning of human action categories using spatial-temporal words. 2008
- Towards scalable dataset construction: An active learning approach. 2008
- Extracting Moving People from Internet Videos. 2008
- Spatial-temporal correlations for unsupervised action classification. 2008
-
Prevalence of pathogenic BRCA1 mutation carriers in 5 US racial/ethnic groups
JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION
2007; 298 (24): 2869-2876
Abstract
Information on the prevalence of pathogenic BRCA1 mutation carriers in racial/ethnic minority populations is limited. The objective was to estimate BRCA1 carrier prevalence in Hispanic, African American, and Asian American female breast cancer patients compared with non-Hispanic white patients with and without Ashkenazi Jewish ancestry. We estimated race/ethnicity-specific prevalence of BRCA1 in a population-based, multiethnic series of female breast cancer patients younger than 65 years at diagnosis who were enrolled at the Northern California site of the Breast Cancer Family Registry during the period 1996-2005. Race/ethnicity and religious ancestry were based on self-report. Weighted estimates of prevalence and 95% confidence intervals (CIs) were based on Horvitz-Thompson estimating equations. Estimates of BRCA1 prevalence were 3.5% (95% CI, 2.1%-5.8%) in Hispanic patients (n = 393), 1.3% (95% CI, 0.6%-2.6%) in African American patients (n = 341), and 0.5% (95% CI, 0.1%-2.0%) in Asian American patients (n = 444), compared with 8.3% (95% CI, 3.1%-20.1%) in Ashkenazi Jewish patients (n = 41) and 2.2% (95% CI, 0.7%-6.9%) in other non-Hispanic white patients (n = 508). Prevalence was particularly high in young (<35 years) African American patients (5/30 patients [16.7%]; 95% CI, 7.1%-34.3%). 185delAG was the most common mutation in Hispanics, found in 5 of 21 carriers (24%). Among African American, Asian American, and Hispanic patients in the Northern California Breast Cancer Family Registry, the prevalence of BRCA1 mutation carriers was highest in Hispanics and lowest in Asian Americans. The higher carrier prevalence in Hispanics may reflect the presence of unrecognized Jewish ancestry in this population.
View details for Web of Science ID 000251816000019
View details for PubMedID 18159056
-
F-18-labeled mini-PEG spacered RGD dimer (F-18-FPRGD2): synthesis and microPET imaging of alpha(v)beta(3) integrin expression
EUROPEAN JOURNAL OF NUCLEAR MEDICINE AND MOLECULAR IMAGING
2007; 34 (11): 1823-1831
Abstract
We have previously reported that (18)F-FB-E[c(RGDyK)](2) ((18)F-FRGD2) allows quantitative PET imaging of integrin alpha(v)beta(3) expression. However, the potential clinical translation was hampered by the relatively low radiochemical yield. The goal of this study was to improve the radiolabeling yield, without compromising the tumor targeting efficiency and in vivo kinetics, by incorporating a hydrophilic bifunctional mini-PEG spacer. (18)F-FB-mini-PEG-E[c(RGDyK)](2) ((18)F-FPRGD2) was synthesized by coupling N-succinimidyl-4-(18)F-fluorobenzoate ((18)F-SFB) with NH(2)-mini-PEG-E[c(RGDyK)](2) (denoted as PRGD2). In vitro receptor binding affinity, metabolic stability, and integrin alpha(v)beta(3) specificity of the new tracer (18)F-FPRGD2 were assessed. The diagnostic value of (18)F-FPRGD2 was evaluated in subcutaneous U87MG glioblastoma xenografted mice and in c-neu transgenic mice by quantitative microPET imaging studies. The decay-corrected radiochemical yield based on (18)F-SFB was more than 60% with radiochemical purity of >99%. (18)F-FPRGD2 had high receptor binding affinity, metabolic stability, and integrin alpha(v)beta(3)-specific tumor uptake in the U87MG glioma xenograft model comparable to those of (18)F-FRGD2. The kidney uptake was appreciably lower for (18)F-FPRGD2 compared with (18)F-FRGD2 [2.0 +/- 0.2%ID/g for (18)F-FPRGD2 vs 3.0 +/- 0.2%ID/g for (18)F-FRGD2 at 1 h post injection (p.i.)]. The uptake in all the other organs except the urinary bladder was at background level. (18)F-FPRGD2 also exhibited excellent tumor uptake in c-neu oncomice (3.6 +/- 0.1%ID/g at 30 min p.i.). Incorporation of a mini-PEG spacer significantly improved the overall radiolabeling yield of (18)F-FPRGD2. (18)F-FPRGD2 also had reduced renal uptake and similar tumor targeting efficacy as compared with (18)F-FRGD2. Further testing and clinical translation of (18)F-FPRGD2 are warranted.
View details for DOI 10.1007/s00259-007-0427-0
View details for Web of Science ID 000250205400015
View details for PubMedID 17492285
-
MicroPET of tumor integrin alpha(v)beta(3) expression using F-18-Labeled PEGylated tetrameric RGD peptide (F-18-FPRGD4)
JOURNAL OF NUCLEAR MEDICINE
2007; 48 (9): 1536-1544
Abstract
In vivo imaging of alpha(v)beta(3) expression has important diagnostic and therapeutic applications. Multimeric cyclic RGD peptides are capable of improving the integrin alpha(v)beta(3)-binding affinity due to the polyvalency effect. Here we report an example of (18)F-labeled tetrameric RGD peptide for PET of alpha(v)beta(3) expression in both xenograft and spontaneous tumor models. The tetrameric RGD peptide E{E[c(RGDyK)](2)}(2) was derived with amino-3,6,9-trioxaundecanoic acid (mini-PEG; PEG is poly(ethylene glycol)) linker through the glutamate alpha-amino group. NH(2)-mini-PEG-E{E[c(RGDyK)](2)}(2) (PRGD4) was labeled with (18)F via the N-succinimidyl-4-(18)F-fluorobenzoate ((18)F-SFB) prosthetic group. The receptor-binding characteristics of the tetrameric RGD peptide tracer (18)F-FPRGD4 were evaluated in vitro by a cell-binding assay and in vivo by quantitative microPET imaging studies. The decay-corrected radiochemical yield for (18)F-FPRGD4 was about 15%, with a total reaction time of 180 min starting from (18)F-F(-). The PEGylation had minimal effect on integrin-binding affinity of the RGD peptide. (18)F-FPRGD4 has significantly higher tumor uptake compared with monomeric and dimeric RGD peptide tracer analogs. The receptor specificity of (18)F-FPRGD4 in vivo was confirmed by effective blocking of the uptake in both tumors and normal organs or tissues with excess c(RGDyK). The tetrameric RGD peptide tracer (18)F-FPRGD4 possessing high integrin-binding affinity and favorable biokinetics is a promising tracer for PET of integrin alpha(v)beta(3) expression in cancer and other angiogenesis related diseases.
View details for DOI 10.2967/jnumed.107.040816
View details for Web of Science ID 000252894700042
View details for PubMedID 17704249
-
HIF-dependent antitumorigenic effect of antioxidants in vivo
CANCER CELL
2007; 12 (3): 230-238
Abstract
The antitumorigenic activity of antioxidants has been presumed to arise from their ability to squelch DNA damage and genomic instability mediated by reactive oxygen species (ROS). Here, we report that antioxidants inhibited three tumorigenic models in vivo. Inhibition of a MYC-dependent human B lymphoma model was unassociated with genomic instability but was linked to diminished hypoxia-inducible factor (HIF)-1 levels in a prolyl hydroxylase 2 and von Hippel-Lindau protein-dependent manner. Ectopic expression of an oxygen-independent, stabilized HIF-1 mutant rescued lymphoma xenografts from inhibition by two antioxidants: N-acetylcysteine and vitamin C. These findings challenge the paradigm that antioxidants diminish tumorigenesis primarily through decreasing DNA damage and mutations and provide significant support for a key antitumorigenic effect of diminishing HIF levels.
View details for DOI 10.1016/j.ccr.2007.08.004
View details for Web of Science ID 000249514500006
View details for PubMedID 17785204
View details for PubMedCentralID PMC2084208
-
Detection of separated analytes in subnanoliter volumes using coaxial thermal lensing
ANALYTICAL CHEMISTRY
2007; 79 (14): 5264-5271
Abstract
A collinear-beam thermal lens detector has been constructed and its properties were characterized. Its application to the high-performance liquid chromatography (HPLC) separation of a mixture of five anthraquinone dyes dissolved in water shows a linear response over 3.5 orders of magnitude and a detection limit that is subnanomolar in the dye concentrations. These results are compared with those obtained previously using cavity ring-down spectroscopy (CRDS) in a Brewster's angle flow cell (Bechtel, K. L.; Zare, R. N.; Kachanov, A. A.; Sanders, S. S.; Paldus, B. A. Anal. Chem. 2005, 77, 1177-1182). The peak-to-peak baseline noise of the thermal lensing detection is 3.5 x 10(-8) absorbance units (AU) with a path length of 200 microm, whereas the peak-to-peak baseline noise of CRDS detection is approximately 2 x 10(-7) AU with a path length of 300 microm. Both of these figures of merit should be compared to the peak-to-peak baseline noise of one of the best commercial UV-vis HPLC detection systems, which is approximately 5 x 10(-6) AU with a path length of 10 mm (1-s integration time). Therefore, the thermal lensing technique has a demonstrated sensitivity of subnanomolar detection that is approximately 140 times better than that of the best commercial UV-vis detector and approximately 5 times better than that of CRDS.
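The quoted sensitivity factors appear to follow directly from ratios of the peak-to-peak baseline noise figures, without normalizing for the different path lengths; the quick check below reproduces the approximately 140-fold and 5-fold numbers from the values given in the abstract.

```python
# Quick check of the sensitivity ratios quoted above, taken as simple ratios
# of the peak-to-peak baseline noise figures (no path-length normalization
# appears to be applied in the comparison).

noise_thermal_lens = 3.5e-8   # AU, 200 micrometer path
noise_crds         = 2e-7     # AU, 300 micrometer path
noise_commercial   = 5e-6     # AU, 10 mm path

print(noise_commercial / noise_thermal_lens)  # ~143  -> "approximately 140 times"
print(noise_crds / noise_thermal_lens)        # ~5.7  -> "approximately 5 times"
```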
View details for DOI 10.1021/ac0705925
View details for Web of Science ID 000247992600021
View details for PubMedID 17569503
-
Characterization of two types of silanol groups on fused-silica surfaces using evanescent-wave cavity ring-down spectroscopy
ANALYTICAL CHEMISTRY
2007; 79 (10): 3654-3661
Abstract
Evanescent-wave cavity ring-down spectroscopy has been applied to a planar fused-silica surface covered with crystal violet (CV+) cations to characterize the silanol groups indirectly. A radiation-polarization dependence of the adsorption isotherm of CV+ at the CH3CN/silica interface is measured and fit to a two-site Langmuir equation to determine the relative populations of two different types of isolated silanol groups. CV+ binding at type I sites yields a free energy of adsorption of -29.9 +/- 0.2 kJ/mol and a saturation surface density of (7.4 +/- 0.5) x 10(12) cm(-2), whereas the values of -17.9 +/- 0.4 kJ/mol and (3.1 +/- 0.4) x 10(13) cm(-2) are obtained for the type II sites. The CV+ cations, each with a planar area of approximately 120 Å(2), seem to be aligned randomly while lying over the SiO- type I sites, thereby suggesting that this type of site may be surrounded by a large empty surface area (>480 Å(2)). In contrast, the CV+ cations on type II sites are restricted, with an average tilt angle of approximately 40 degrees off the surface normal, suggesting that the CV+ cations on these sites are grouped closely together. The average tilt angle increases with increasing concentration of crystal violet, so that CV+ cations may be separated from each other to minimize the repulsion of nearby CV+ and SiOH sites.
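For readers unfamiliar with the analysis, the sketch below shows what a two-site Langmuir fit looks like in practice and how an equilibrium constant maps to a free energy of adsorption via Delta G = -RT ln K. The data points are synthetic (generated from parameters similar to the reported values) and the choice of standard state is an assumption; this is an illustration, not the authors' fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of a two-site Langmuir fit of the kind described above. The coverage
# data are synthetic, generated from a two-site Langmuir model with parameters
# similar to those reported; only the functional form and the
# Delta G = -RT ln K conversion are standard.

R = 8.314e-3  # kJ mol^-1 K^-1
T = 298.0     # K

def two_site_langmuir(c, n1, k1, n2, k2):
    """Total surface coverage from two independent Langmuir site types."""
    return n1 * k1 * c / (1 + k1 * c) + n2 * k2 * c / (1 + k2 * c)

# Hypothetical bulk concentrations (M) and surface densities (10^12 cm^-2)
c = np.array([1e-6, 5e-6, 1e-5, 5e-5, 1e-4, 5e-4, 1e-3, 5e-3])
theta = np.array([1.14, 3.66, 5.13, 8.63, 10.7, 19.9, 25.3, 34.5])

popt, _ = curve_fit(two_site_langmuir, c, theta,
                    p0=[7.0, 1e5, 30.0, 1e3], maxfev=20000)
n1, k1, n2, k2 = popt
print(f"Type I:  N = {n1:.1f} x 10^12 cm^-2, dG = {-R*T*np.log(k1):.1f} kJ/mol")
print(f"Type II: N = {n2:.1f} x 10^12 cm^-2, dG = {-R*T*np.log(k2):.1f} kJ/mol")
```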
View details for DOI 10.1021/ac062386n
View details for Web of Science ID 000246414400017
View details for PubMedID 17429945
-
Breast and ovarian cancer in relatives of cancer patients, with and without BRCA mutations
CANCER EPIDEMIOLOGY BIOMARKERS & PREVENTION
2006; 15 (2): 359-363
Abstract
First-degree relatives of patients with breast or ovarian cancer have increased risks for these cancers. Little is known about how their risks vary with the patient's cancer site, carrier status for predisposing genetic mutations, or age at cancer diagnosis. We evaluated breast and ovarian cancer incidence in 2,935 female first-degree relatives of non-Hispanic White female patients with incident invasive cancers of the breast (n = 669) or ovary (n = 339) who were recruited from a population-based cancer registry in northern California. Breast cancer patients were tested for BRCA1 and BRCA2 mutations. Ovarian cancer patients were tested for BRCA1 mutations. We estimated standardized incidence ratios (SIR) and 95% confidence intervals (95% CI) for breast and ovarian cancer among the relatives according to the patient's mutation status, cancer site, and age at cancer diagnosis. In families of patients who were negative or untested for BRCA1 or BRCA2 mutations, risks were elevated only for the patient's cancer site. The breast cancer SIR was 1.5 (95% CI, 1.2-1.8) for relatives of breast cancer patients, compared with 1.1 (95% CI, 0.8-1.6) for relatives of ovarian cancer patients (P = 0.12 for difference by patient's cancer site). The ovarian cancer SIR was 0.9 (95% CI, 0.5-1.4) for relatives of breast cancer patients, compared with 1.9 (95% CI, 1.0-4.0) for relatives of ovarian cancer patients (P = 0.04 for difference by site). In families of BRCA1-positive patients, relatives' risks also correlated with the patient's cancer site. The breast cancer SIR was 10.6 (95% CI, 5.2-21.6) for relatives of breast cancer patients, compared with 3.3 (95% CI, 1.4-7.3) for relatives of ovarian cancer patients (two-sided P = 0.02 for difference by site). The ovarian cancer SIR was 7.9 (95% CI, 1.2-53.0) for relatives of breast cancer patients, compared with 11.3 (95% CI, 3.6-35.9) for relatives of ovarian cancer patients (two-sided P = 0.37 for difference by site). Relatives' risks were independent of patients' ages at diagnosis, with one exception: in families ascertained through a breast cancer patient without BRCA mutations, breast cancer risks were higher if the patient had been diagnosed before age 40 years. In families of patients with and without BRCA1 mutations, breast and ovarian cancer risks correlate with the patient's cancer site. Moreover, in families of breast cancer patients without BRCA mutations, breast cancer risk depends on the patient's age at diagnosis. These patterns support the presence of genes that modify risk specific to cancer site, in both carriers and noncarriers of BRCA1 and BRCA2 mutations.
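As background, a standardized incidence ratio is the ratio of observed to expected case counts, with the expected count derived from population rates; a common way to attach a 95% CI is the exact Poisson (chi-square based) interval. The sketch below uses hypothetical counts and is not the registry analysis itself.

```python
from scipy.stats import chi2

# Sketch of how a standardized incidence ratio (SIR) and its 95% CI can be
# computed: SIR = observed cases / expected cases, with an exact Poisson
# interval on the observed count. The counts below are hypothetical.

def sir_with_ci(observed, expected, alpha=0.05):
    """SIR with an exact (chi-square based) Poisson confidence interval."""
    lower = chi2.ppf(alpha / 2, 2 * observed) / 2 if observed > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
    return observed / expected, lower / expected, upper / expected

# Hypothetical example: 45 breast cancers observed among relatives where
# population rates predict 30 expected cases.
sir, lo, hi = sir_with_ci(observed=45, expected=30.0)
print(f"SIR = {sir:.1f} (95% CI, {lo:.1f}-{hi:.1f})")  # roughly 1.5 (1.1-2.0)
```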
View details for DOI 10.1158/1055-9965.EPI-05-0687
View details for PubMedID 16492929
-
The protective effect of dantrolene on ischemic neuronal cell death is associated with reduced expression of endoplasmic reticulum stress markers
BRAIN RESEARCH
2005; 1048 (1-2): 59-68
Abstract
The endoplasmic reticulum (ER) plays an important role in ischemic neuronal cell death. To determine the effect of dantrolene, a ryanodine receptor antagonist, on the ER stress response and ischemic brain injury, we investigated changes in ER stress-related molecules, namely the phosphorylated form of double-stranded RNA-activated protein kinase (PKR)-like ER kinase (p-PERK), the phosphorylated form of eukaryotic initiation factor 2alpha (p-eIF2alpha), activating transcription factor-4 (ATF-4), and C/EBP-homologous protein (CHOP), as well as terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling (TUNEL), in the peri-ischemic area and ischemic core region of rat brain after transient middle cerebral artery occlusion (MCAO). Compared with vehicle treatment, dantrolene significantly reduced the infarct volume and the number of TUNEL-positive cells at 24 h of reperfusion. The immunoreactivities for p-PERK, p-eIF2alpha, ATF-4, and CHOP were increased in the ischemic peripheral region after MCAO, and these increases were partially inhibited by dantrolene treatment. The present results suggest that dantrolene significantly decreased infarct volume and provided a neuroprotective effect in rats after transient MCAO by reducing activation of the ER stress-mediated apoptotic signaling pathway in the ischemic area.
View details for DOI 10.1016/j.brainres.2005.04.058
View details for Web of Science ID 000230259600007
View details for PubMedID 15921666
-
Familial gastrointestinal stromal tumor syndrome: Phenotypic and molecular features in a kindred
JOURNAL OF CLINICAL ONCOLOGY
2005; 23 (12): 2735-2743
Abstract
Members of a family with hereditary gastrointestinal stromal tumors (GISTs) and a germline KIT oncogene mutation were evaluated for other potential syndrome manifestations. A tumor from the proband was analyzed to compare features with sporadic GISTs. Members of a kindred in which six relatives in four consecutive generations comprised an autosomal dominant pattern of documented GISTs and cutaneous lesions underwent physical examination, imaging studies, and germline KIT analysis. A recurrent GIST from the proband was studied using microarray, karyotypic, immunohistochemical, and immunoblotting techniques. In addition to evidence of multiple GISTs, lentigines, malignant melanoma, and an angioleiomyoma were identified in relatives. A previously reported gain-of-function missense mutation in KIT exon 11 (T --> C) that results in a V559A substitution within the juxtamembrane domain was identified in three family members. The proband's recurrent gastric GIST had a 44,XY,-14,-22 karyotype and immunohistochemical evidence of strong diffuse cytoplasmic KIT expression without expression of actin, desmin, or S-100. Immunoblotting showed strong expression of phosphorylated KIT and downstream signaling intermediates (AKT and MAPK) at levels comparable with those reported in sporadic GISTs. cDNA array profiling demonstrated clustering with sporadic GISTs, and expression of GIST markers comparable to sporadic GISTs. These studies provide the first evidence that gene expression and mechanisms of cytogenetic progression and cell signaling are indistinguishable in familial and sporadic GISTs. Current investigations of molecularly targeted therapies in GIST patients provide opportunities to increase the understanding of features of the hereditary syndrome, and risk factors and molecular pathways of the neoplastic phenotypes.
View details for DOI 10.1200/JCO.2005.06.009
View details for Web of Science ID 000228563600021
View details for PubMedID 15837988
-
Molecular orientation study of methylene blue at an air/fused-silica interface using evanescent-wave cavity ring-down spectroscopy
JOURNAL OF PHYSICAL CHEMISTRY B
2005; 109 (8): 3330-3333
Abstract
Using evanescent-wave cavity ring-down spectroscopy (EW-CRDS), we monitored the change in the absorbance of a thin film of methylene blue (MB) at an air/fused-silica interface while varying the polarization of the incident light (600 nm). We derived the average orientation angle of the planar MB molecules with respect to the surface normal and observed that the average orientation angle decreases as the surface concentration increases. At low surface concentrations, the MB molecules lie almost flat on the surface, whereas at higher surface concentrations the molecules become vertically oriented.
View details for DOI 10.1021/jp045290a
View details for Web of Science ID 000227247100037
View details for PubMedID 16851361
-
Oral contraceptive use and risk of early-onset breast cancer in carriers and noncarriers of BRCA1 and BRCA2 mutations
CANCER EPIDEMIOLOGY BIOMARKERS & PREVENTION
2005; 14 (2): 350-356
Abstract
Recent oral contraceptive use has been associated with a small increase in breast cancer risk and a substantial decrease in ovarian cancer risk. The effects on risks for women with germ line mutations in BRCA1 or BRCA2 are unclear. Subjects were population-based samples of Caucasian women that comprised 1,156 incident cases of invasive breast cancer diagnosed before age 40 (including 47 BRCA1 and 36 BRCA2 mutation carriers) and 815 controls from the San Francisco Bay area, California, Ontario, Canada, and Melbourne and Sydney, Australia. Relative risks by carrier status were estimated using unconditional logistic regression, comparing oral contraceptive use in case groups defined by mutation status with that in controls. After adjustment for potential confounders, oral contraceptive use for at least 12 months was associated with decreased breast cancer risk for BRCA1 mutation carriers [odds ratio (OR), 0.22; 95% confidence interval (CI), 0.10-0.49; P < 0.001], but not for BRCA2 mutation carriers (OR, 1.02; 95% CI, 0.34-3.09) or noncarriers (OR, 0.93; 95% CI, 0.69-1.24). First use during or before 1975 was associated with increased risk for noncarriers (OR, 1.52 per year of use before 1976; 95% CI, 1.22-1.91; P < 0.001). There was no evidence that use of current low-dose oral contraceptive formulations increases risk of early-onset breast cancer for mutation carriers, and there may be a reduced risk for BRCA1 mutation carriers. Because current formulations of oral contraceptives may reduce, or at least not exacerbate, ovarian cancer risk for mutation carriers, they should not be contraindicated for a woman with a germ line mutation in BRCA1 or BRCA2.
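The sketch below illustrates the general shape of an unconditional logistic regression analysis of this kind: fit a logit model of case status on exposure plus confounders and exponentiate the exposure coefficient to obtain an odds ratio with its 95% CI. The simulated data, covariates, and effect sizes are hypothetical; this is not the study's actual analysis.

```python
import numpy as np
import statsmodels.api as sm

# Sketch of unconditional logistic regression for an odds ratio of the kind
# reported above. The data are simulated and the single confounder is a
# placeholder; a real analysis would include the confounders described above.

rng = np.random.default_rng(0)
n = 500
oc_use = rng.integers(0, 2, n)            # >= 12 months of OC use (0/1)
age = rng.uniform(25, 40, n)              # example confounder
logit = -1.0 - 0.8 * oc_use + 0.02 * age  # hypothetical true model
case = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([oc_use, age]))
fit = sm.Logit(case, X).fit(disp=False)

or_est = np.exp(fit.params[1])            # odds ratio for OC use
ci_lo, ci_hi = np.exp(fit.conf_int()[1])  # 95% CI on the OR scale
print(f"OR = {or_est:.2f} (95% CI, {ci_lo:.2f}-{ci_hi:.2f})")
```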
View details for Web of Science ID 000227113800010
View details for PubMedID 15734957
-
Prevalence of BRCA1 mutation carriers among US non-Hispanic Whites
CANCER EPIDEMIOLOGY BIOMARKERS & PREVENTION
2004; 13 (12): 2078-2083
Abstract
Data from several countries indicate that 1% to 2% of Ashkenazi Jews carry a pathogenic ancestral mutation of the tumor suppressor gene BRCA1. However, the prevalence of BRCA1 mutations among non-Ashkenazi Whites is uncertain. We estimated mutation carrier prevalence in U.S. non-Hispanic Whites, specific for Ashkenazi status, using data from two population-based series of San Francisco Bay Area patients with invasive cancers of the breast or ovary, and data on breast and ovarian cancer risks in Ashkenazi and non-Ashkenazi carriers. Assuming that 90% of the BRCA1 mutations were detected, we estimate a carrier prevalence of 0.24% (95% confidence interval, 0.15-0.39%) in non-Ashkenazi Whites, and 1.2% (95% confidence interval, 0.5-2.6%) in Ashkenazim. When combined with U.S. White census counts, these prevalence estimates suggest that approximately 550,513 U.S. Whites (506,206 non-Ashkenazim and 44,307 Ashkenazim) carry germ line BRCA1 mutations. These estimates may be useful in guiding resource allocation for genetic testing and genetic counseling and in planning preventive interventions.
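Two steps in this estimate are easy to make concrete: dividing the observed carrier frequency by the assumed 90% mutation-detection sensitivity, and multiplying the corrected prevalence by a population count. The numbers in the sketch below are placeholders, not the study's inputs.

```python
# Sketch of the two adjustments described above: correcting an observed
# carrier frequency for incomplete mutation detection (assumed 90%
# sensitivity), then scaling the corrected prevalence to a population count.
# The observed frequency and population size below are hypothetical.

detection_sensitivity = 0.90      # assumed fraction of mutations detected
observed_frequency = 0.0022       # hypothetical observed carrier frequency
population = 2.1e8                # hypothetical population at risk

true_prevalence = observed_frequency / detection_sensitivity
estimated_carriers = true_prevalence * population
print(f"Adjusted prevalence: {true_prevalence:.4%}")
print(f"Estimated carriers:  {estimated_carriers:,.0f}")
```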
View details for PubMedID 15598764
-
Oxidative damage to the endoplasmic reticulum is implicated in ischemic neuronal cell death
3rd International Conference on the Biology, Chemistry and Therapeutic Applications of Nitric Oxide/4th Annual Meeting of the Nitric-Oxide-Society-of-Japan
ACADEMIC PRESS INC ELSEVIER SCIENCE. 2004: 61–61
View details for Web of Science ID 000224022300117
-
Design of polarization beam splitter in two-dimensional triangular photonic crystals
CHINESE PHYSICS LETTERS
2004; 21 (7): 1285-1288
View details for Web of Science ID 000222542100028
-
Induction of grp78 by ischemic preconditioning reduces endoplasmic reticulum stress and prevents delayed neuronal cell death
5th World Stroke Congress
LIPPINCOTT WILLIAMS & WILKINS. 2004: E249–E249
View details for Web of Science ID 000221676600565
-
Temporal profile of angiogenesis and expression of related genes in the brain after ischemia
5th World Stroke Congress
LIPPINCOTT WILLIAMS & WILKINS. 2004: E250–E250
View details for Web of Science ID 000221676600572
-
Measurement of inclusive momentum spectra and multiplicity distributions of charged particles at root s similar to 2-5 GeV
PHYSICAL REVIEW D
2004; 69 (7)
View details for DOI 10.1103/PhysRevD.69.072002
View details for Web of Science ID 000221253900006
-
Adsorption of crystal violet to the silica-water interface monitored by evanescent wave cavity ring-down spectroscopy
JOURNAL OF PHYSICAL CHEMISTRY B
2003; 107 (29): 7070-7075
View details for DOI 10.1021/jp027636s
View details for Web of Science ID 000184242600022
-
Observation of a near-threshold enhancement in the p(p)over-bar mass spectrum from radiative J/psi -> gamma p(p)over-bar decays
PHYSICAL REVIEW LETTERS
2003; 91 (2)
Abstract
We observe a narrow enhancement near 2m(p) in the invariant mass spectrum of p(p)over-bar pairs from radiative J/psi --> gamma p(p)over-bar decays. No similar structure is seen in J/psi --> pi(0) p(p)over-bar decays. The results are based on an analysis of a 58 x 10(6) event sample of J/psi decays accumulated with the BESII detector at the Beijing electron-positron collider. The enhancement can be fit with either an S- or P-wave Breit-Wigner resonance function. In the case of the S-wave fit, the peak mass is below 2m(p) at M = 1859 +3/-10 (stat) +5/-25 (syst) MeV/c(2) and the total width is Gamma < 30 MeV/c(2) at the 90% confidence level. These mass and width values are not consistent with the properties of any known particle.
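As an aside on what an S-wave Breit-Wigner fit below threshold implies, the toy sketch below evaluates a non-relativistic Breit-Wigner line shape with the central mass and the width upper limit quoted above; because the peak mass sits below 2m(p), the sampled spectrum piles up right at the p(p)over-bar threshold. Efficiency, resolution, and phase-space factors from the real fit are deliberately omitted.

```python
import numpy as np

# Toy S-wave Breit-Wigner line shape near the p(p)over-bar threshold, using the
# central mass and the 90% CL width limit quoted above just to draw a curve.
# This is a conceptual illustration, not the published fit.

M_PROTON = 938.272  # MeV/c^2

def breit_wigner(m, m0, gamma):
    """Non-relativistic S-wave Breit-Wigner intensity (arbitrary normalization)."""
    return 1.0 / ((m - m0) ** 2 + gamma ** 2 / 4.0)

m = np.linspace(2 * M_PROTON, 2 * M_PROTON + 300.0, 601)  # MeV/c^2, from threshold up
intensity = breit_wigner(m, m0=1859.0, gamma=30.0)

# Because m0 lies below 2*M_PROTON, the sampled distribution is maximal at the
# threshold itself, i.e. it appears as a near-threshold enhancement.
print(f"Sampled line shape peaks at {m[np.argmax(intensity)]:.1f} MeV/c^2")
```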
View details for DOI 10.1103/PhysRevLett.91.022001
View details for Web of Science ID 000184086000007
-
Search for lepton flavor violation process J/psi -> e mu
PHYSICS LETTERS B
2003; 561 (1-2): 49-54
View details for DOI 10.1016/S0370-2693(03)00391-5
View details for Web of Science ID 000182760200006
-
Measurements of the mass and full-width of the eta c meson
PHYSICS LETTERS B
2003; 555 (3-4): 174-180
View details for DOI 10.1016/S0370-2693(03)00074-1
View details for Web of Science ID 000181126100006
-
Comparison of DNA- and RNA-based methods for detection of truncating BRCA1 mutations
HUMAN MUTATION
2002; 20 (1): 65-73
Abstract
A number of methods are used for mutational analysis of BRCA1, a large multi-exon gene. A comparison was made of five methods to detect mutations generating premature stop codons that are predicted to result in synthesis of a truncated protein in BRCA1. These included four DNA-based methods, two-dimensional gene scanning (TDGS), denaturing high performance liquid chromatography (DHPLC), enzymatic mutation detection (EMD), and single strand conformation polymorphism analysis (SSCP), as well as an RNA/DNA-based protein truncation test (PTT) with and without complementary 5' sequencing. DNA and RNA samples isolated from 21 coded lymphoblastoid cell line samples were tested. These specimens had previously been analyzed by direct automated DNA sequencing, considered to be the optimum method for mutation detection. The set of 21 cell lines included 14 samples with 13 unique frameshift or nonsense mutations, three samples with two unique splice site mutations, and four samples without deleterious mutations. The present study focused on the detection of protein-truncating mutations, those that have been reported most often to be disease-causing alterations that segregate with cancer in families. PTT with complementary 5' sequencing correctly identified all 15 deleterious mutations. Not surprisingly, the DNA-based techniques did not detect a deletion of exon 22. EMD and DHPLC identified all of the mutations with the exception of the exon 22 deletion. Two mutations were initially missed by TDGS, but could be detected after slight changes in the test design, and five truncating mutations were missed by SSCP. It will continue to be important to use complementary methods for mutational analysis.
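A rough way to summarize the comparison is the per-method detection sensitivity over the 15 deleterious mutations. The tabulation below reads the counts off the abstract (PTT with 5' sequencing found all 15, EMD and DHPLC missed only the exon 22 deletion, TDGS initially missed two, SSCP missed five); the denominators are approximate, since the abstract does not break every miss down by mutation type.

```python
# Rough per-method detection sensitivities implied by the abstract:
# 15 deleterious (truncating or splice-site) mutations in total, with the
# misses as described in the text. Counts are read off the abstract and the
# denominators are approximate.

total_deleterious = 15
detected = {
    "PTT + 5' sequencing": 15,    # all deleterious mutations found
    "EMD": 14,                    # missed only the exon 22 deletion
    "DHPLC": 14,                  # missed only the exon 22 deletion
    "TDGS (initial design)": 13,  # two mutations initially missed
    "SSCP": 10,                   # five truncating mutations missed
}

for method, n in detected.items():
    print(f"{method:24s} {n}/{total_deleterious} = {n / total_deleterious:.0%}")
```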
View details for DOI 10.1002/humu.10097
View details for Web of Science ID 000176744500008
View details for PubMedID 12112659
-
Comparison of methods for detection of mutations in the BRCA1 gene.
CELL PRESS. 2001: 440–40
View details for Web of Science ID 000171648901507
-
The BES upgrade
NUCLEAR INSTRUMENTS & METHODS IN PHYSICS RESEARCH SECTION A-ACCELERATORS SPECTROMETERS DETECTORS AND ASSOCIATED EQUIPMENT
2001; 458 (3): 627-637
View details for Web of Science ID 000167040900003
-
Participation in the cooperative family registry for breast cancer studies: Issues of informed consent
JOURNAL OF THE NATIONAL CANCER INSTITUTE
2000; 92 (6): 452-456
View details for Web of Science ID 000085794200009
View details for PubMedID 10716962
-
Thickness measurement of submonolayer native oxide films on silicon wafers
SOLID STATE TECHNOLOGY
2000; 43 (2): 87-?
View details for Web of Science ID 000085305000015
-
The Eighth AACR American Cancer Society Award Lecture on Cancer Epidemiology and Prevention - Introduction
CANCER EPIDEMIOLOGY BIOMARKERS & PREVENTION
1999; 8 (8): 649-649
View details for Web of Science ID 000083705300001
-
Measurement of the branching ratios for the decays of Ds(+) to eta pi(+), eta 'pi(+), eta rho(+), and eta 'rho(+)
PHYSICAL REVIEW D
1998; 58 (5)
View details for Web of Science ID 000075748700006
-
Evidence for the leptonic decay D ->mu nu(mu)
PHYSICS LETTERS B
1998; 429 (1-2): 188-194
View details for Web of Science ID 000074694400026
-
Measurement of the branching fractions of Lambda(+)(c)-> p(K)over-bar-n(pi)
PHYSICAL REVIEW D
1998; 57 (7): 4467-4470
View details for Web of Science ID 000072880400076
-
New measurement of B -> D*pi branching fractions
PHYSICAL REVIEW LETTERS
1998; 80 (13): 2762-2766
View details for Web of Science ID 000072757000004
-
Measurement of the total cross section for e(+)e(-)-> hadrons at root s=10.52 GeV
PHYSICAL REVIEW D
1998; 57 (3): 1350-1358
View details for Web of Science ID 000071834800006
-
Measurements of the meson-photon transition form factors of light pseudoscalar mesons at large momentum transfer
PHYSICAL REVIEW D
1998; 57 (1): 33-54
View details for Web of Science ID 000071320700006
-
Direct measurement of B(D-s(+)->phi X+)
PHYSICAL REVIEW D
1998; 57 (1): 28-32
View details for Web of Science ID 000071320700005
-
Measurement of the decay amplitudes and branching fractions of B->J/psi K-* and B->J/psi K decays
PHYSICAL REVIEW LETTERS
1997; 79 (23): 4533-4537
View details for Web of Science ID A1997YK36500005
-
Limit on the two-photon production of the glueball candidate f(J)(2220) at the Cornell electron storage ring
PHYSICAL REVIEW LETTERS
1997; 79 (20): 3829-3833
View details for Web of Science ID A1997YF78600008
-
Study of the decay tau(-)->2 pi(-)pi(+)3 pi(0)nu(tau)
PHYSICAL REVIEW LETTERS
1997; 79 (20): 3814-3818
View details for Web of Science ID A1997YF78600005
-
First observation of inclusive B decays to the charmed strange baryons xi(0)(c) and xi(+)(c)
PHYSICAL REVIEW LETTERS
1997; 79 (19): 3599-3603
View details for Web of Science ID A1997YF18600014
-
Search for the decay tau(-)->4 pi(-)3 pi(+)(pi(0))nu(tau)
PHYSICAL REVIEW D
1997; 56 (9): R5297-R5300
View details for Web of Science ID A1997YF04400001
-
New upper limit on the decay eta->e(+)e(-)
PHYSICAL REVIEW D
1997; 56 (9): 5359-5365
View details for Web of Science ID A1997YF04400007
-
Determination of the Michel parameters and the tau neutrino helicity in tau decay
PHYSICAL REVIEW D
1997; 56 (9): 5320-5329
View details for Web of Science ID A1997YF04400005
-
Observation of exclusive B decays to final states containing a charmed baryon
PHYSICAL REVIEW LETTERS
1997; 79 (17): 3125-3129
View details for Web of Science ID A1997YC78200008
-
Inclusive decays B->DX and B->D*X
PHYSICAL REVIEW D
1997; 56 (7): 3783-3802
View details for Web of Science ID A1997YA57300002
-
First observation of tau->3 pi eta nu(tau) and tau->f(1)pi nu(tau) decays
PHYSICAL REVIEW LETTERS
1997; 79 (13): 2406-2410
View details for Web of Science ID A1997XZ11400005
-
Measurement of the (B)over-bar->Dl(nu)over-bar partial width and form factor parameters
PHYSICAL REVIEW LETTERS
1997; 79 (12): 2208-2212
View details for Web of Science ID A1997XW93900012
-
Lambda(Lambda)over-bar production in two-photon interactions
PHYSICAL REVIEW D
1997; 56 (5): R2485-R2489
View details for Web of Science ID A1997XV62500001
-
Observation of the decay D-s(+)->omega pi(+)
PHYSICAL REVIEW LETTERS
1997; 79 (8): 1436-1440
View details for Web of Science ID A1997XT12400004
-
Search for neutrinoless tau decays involving pi(0) or eta mesons
PHYSICAL REVIEW LETTERS
1997; 79 (7): 1221-1224
View details for Web of Science ID A1997XR28700012
-
Search for the decays B-0->D(*)D+(*)(-)
PHYSICAL REVIEW LETTERS
1997; 79 (5): 799-803
View details for Web of Science ID A1997XN80900007
-
Studies of the Cabibbo-suppressed decays D+->pi(0)l(+)nu and D+->eta e(+)nu(e)
PHYSICS LETTERS B
1997; 405 (3-4): 373-378
View details for Web of Science ID A1997XP69800026
-
Study of gluon versus quark fragmentation in Y->gg gamma and e(+)e(-)->q(q)over-bar gamma events at root s=10 GeV
PHYSICAL REVIEW D
1997; 56 (1): 17-22
View details for Web of Science ID A1997XH85700004
-
Search for B->mu(nu)over-bar(mu)gamma and B->e(nu)over-bar(e)gamma
PHYSICAL REVIEW D
1997; 56 (1): 11-16
View details for Web of Science ID A1997XH85700003
-
A measurement of the Michel parameters in leptonic decays of the tau
PHYSICAL REVIEW LETTERS
1997; 78 (25): 4686-4690
View details for Web of Science ID A1997XJ26900005
-
nu(tau) helicity from h(+/-) energy correlations
PHYSICAL REVIEW D
1997; 55 (11): 7291-7295
View details for Web of Science ID A1997XD01800057
-
Study of the B-0 semileptonic decay spectrum at the Y(4S) resonance
PHYSICS LETTERS B
1997; 399 (3-4): 321-328
View details for Web of Science ID A1997WX03700021
-
Measurement of the direct photon spectrum in Y(1S) decays
PHYSICAL REVIEW D
1997; 55 (9): 5273-5281
View details for Web of Science ID A1997WX51900003
-
Analysis of D+->K-S((0))K+ and D+->K-S((0))pi(+)
PHYSICAL REVIEW LETTERS
1997; 78 (17): 3261-3265
View details for Web of Science ID A1997WW39900009
-
Search for neutrinoless tau decays: tau->e gamma and tau->mu gamma
PHYSICAL REVIEW D
1997; 55 (7): R3919-R3923
View details for Web of Science ID A1997WY65400001
-
Observation of two excited charmed baryons decaying into Lambda(+)(c)pi(+/-)
PHYSICAL REVIEW LETTERS
1997; 78 (12): 2304-2308
View details for Web of Science ID A1997WP51300008
-
Experimental tests of lepton universality in tau decay
PHYSICAL REVIEW D
1997; 55 (5): 2559-2576
View details for Web of Science ID A1997WL50300007
-
Search for phi mesons in tau lepton decay
PHYSICAL REVIEW D
1997; 55 (3): R1119-R1123
View details for Web of Science ID A1997WF29700001
-
First measurement of the B->pi l nu and B->rho(omega)l nu branching fractions
PHYSICAL REVIEW LETTERS
1996; 77 (25): 5000-5004
View details for Web of Science ID A1996VY11100007
-
A search for nonresonant B+->h(+)h(-)h(+) decays
PHYSICAL REVIEW LETTERS
1996; 77 (22): 4503-4507
View details for Web of Science ID A1996VU50200006
-
Measurement of the tau lepton lifetime
PHYSICS LETTERS B
1996; 388 (2): 402-408
View details for Web of Science ID A1996VU29400028
-
Analysis of D-0->K(K)over-bar-X decays
PHYSICAL REVIEW D
1996; 54 (7): 4211-4220
View details for Web of Science ID A1996VM97700004
-
Observation of an excited charmed baryon decaying into Xi(c)(0)pi(+)
PHYSICAL REVIEW LETTERS
1996; 77 (5): 810-813
View details for Web of Science ID A1996UY95200005
-
Search for a vector glueball by a scan of the J/psi resonance
PHYSICAL REVIEW D
1996; 54 (1): 1221-1224
View details for Web of Science ID A1996UV18700059
-
Measurement of the branching fraction for D-s(-)->phi pi(-)
PHYSICS LETTERS B
1996; 378 (1-4): 364-372
View details for Web of Science ID A1996UV05800055
-
Decays of tau leptons to final states containing K-S(0) mesons
PHYSICAL REVIEW D
1996; 53 (11): 6037-6053
View details for Web of Science ID A1996UN90500005
-
First observation of the decay tau(-)->K-eta nu(tau)
PHYSICAL REVIEW LETTERS
1996; 76 (22): 4119-4123
View details for Web of Science ID A1996UM24500005
-
Measurement of the form factors for (B)over-bar(0)->D*(+)l(-)(nu)over-bar
PHYSICAL REVIEW LETTERS
1996; 76 (21): 3898-3902
View details for Web of Science ID A1996UL24700005
-
A measurement of B(D-0->K-pi(+)pi(0))/B(D-0->K-pi(+))
PHYSICS LETTERS B
1996; 373 (4): 334-338
View details for Web of Science ID A1996UJ91100012
-
Limits on flavor changing neutral currents in D-0 meson Decays
PHYSICAL REVIEW LETTERS
1996; 76 (17): 3065-3069
View details for Web of Science ID A1996UF74400006
-
Tau decays into three charged leptons and two neutrinos
PHYSICAL REVIEW LETTERS
1996; 76 (15): 2637-2641
View details for Web of Science ID A1996UE19000009
-
Measurement of the mass of the tau lepton
PHYSICAL REVIEW D
1996; 53 (1): 20-34
View details for Web of Science ID A1996TP72000004
-
Direct measurement of the D-s branching fraction to phi pi
PHYSICAL REVIEW D
1995; 52 (7): 3781-3784
View details for Web of Science ID A1995TL18200005
-
DIRECT MEASUREMENT OF THE PSEUDOSCALAR DECAY CONSTANT, F(D-S)
PHYSICAL REVIEW LETTERS
1995; 74 (23): 4599-4602
View details for Web of Science ID A1995RB20000010
-
AN EVALUATION OF GENETIC-HETEROGENEITY IN 145 BREAST-CANCER OVARIAN-CANCER FAMILIES
AMERICAN JOURNAL OF HUMAN GENETICS
1995; 56 (1): 254-264
Abstract
The breast-ovary cancer-family syndrome is a dominant predisposition to cancer of the breast and ovaries which has been mapped to chromosome region 17q12-q21. The majority, but not all, of breast-ovary cancer families show linkage to this susceptibility locus, designated BRCA1. We report here the results of a linkage analysis of 145 families with both breast and ovarian cancer. These families contain a total of three or more cases of early-onset (before age 60 years) breast cancer or ovarian cancer. All families contained at least one case of ovarian cancer. Overall, an estimated 76% of the 145 families are linked to the BRCA1 locus. None of the 13 families with cases of male breast cancer appear to be linked, but it is estimated that 92% (95% confidence interval 76%-100%) of families with no male breast cancer and with two or more ovarian cancers are linked to BRCA1. These data suggest that the breast-ovarian cancer-family syndrome is genetically heterogeneous. However, the large majority of families with early-onset breast cancer and with two or more cases of ovarian cancer are likely to be due to BRCA1 mutations.
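For readers curious how a figure like "76% of families are linked" can be estimated, one common approach is an admixture (heterogeneity) analysis: each family contributes a likelihood under linkage and under no linkage, and the mixing proportion is estimated by maximum likelihood. The sketch below illustrates that idea with made-up per-family LOD scores; it is a conceptual illustration, not the analysis used in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Admixture (heterogeneity) sketch: each family contributes a likelihood ratio
# of linkage vs no linkage (here derived from made-up LOD scores), and the
# proportion alpha of linked families maximizes the mixture likelihood
#     L(alpha) = prod_i [ alpha * LR_i + (1 - alpha) ].
# The published analysis used its own likelihoods and a more elaborate model.

lod_scores = np.array([1.2, 0.8, -0.4, 2.1, -1.0, 0.3, 1.5, -0.2])
lr_linked = 10.0 ** lod_scores  # per-family likelihood ratio for linkage

def neg_log_likelihood(alpha):
    return -np.sum(np.log(alpha * lr_linked + (1.0 - alpha)))

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1.0 - 1e-6), method="bounded")
# With these made-up LOD scores the estimate comes out near 0.8.
print(f"Estimated proportion of linked families: {res.x:.2f}")
```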
View details for Web of Science ID A1995QC10100033
View details for PubMedCentralID PMC1801289
-
MEASUREMENT OF THE MASS OF THE TAU-LEPTON
PHYSICAL REVIEW LETTERS
1992; 69 (21): 3021-3024
View details for Web of Science ID A1992JY87900006